SQL Sync portal doesn't see any tables or schemas


  • I was trying out the new SQL Sync from the new portal. I did the following:

    1. Created a sync group (added a testhub database and a testdata database, both on an Azure SQL server; testdata has 1 table)
    2. Created a local sync agent (Win8 machine, added a testdatalocal database; testdatalocal has 2 tables)
    3. Turned on sync at 5-minute intervals, hub wins
    4. All this time, the sync group is "not ready"
    5. I therefore presume I need to add a sync rule (this isn't made clear anywhere; I'm guessing)
    6. Attempted to add a sync rule; I keep getting "This database is empty or its schema has not been refreshed."
    7. I get the above on both the testdata and testdatalocal databases, even after clicking Refresh Schema (and the portal saying "schema refreshed successfully")

    How can we get this working? We haven't really done anything fancy here, basic stuff that I'm sure would have been tested internally.

    A ticket to Azure's official engineers suggested that I try the forums, since official support engineers don't work on technologies that are in the 'preview' phase.

    Thursday, December 27, 2012 8:20 PM

All replies

  • In my environment, it works well
    Thursday, January 03, 2013 8:42 AM
  • Sid,

    Do you have PKs on your tables?

    Are your tables using schema names other than dbo? If they do, does the user account you use to connect to the database default to that schema?
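
    If it helps, both of these can be checked from a query window. This is a sketch against the standard SQL Server catalog views; nothing here is specific to any particular database:

```sql
-- List user tables that have no primary key (Data Sync requires one)
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (
    SELECT 1
    FROM sys.key_constraints AS kc
    WHERE kc.parent_object_id = t.object_id
      AND kc.type = 'PK'
);

-- Show the database user and default schema of the current connection
SELECT USER_NAME() AS db_user, SCHEMA_NAME() AS default_schema;
```

    Run it while connected as the same account the sync service uses; an empty first result set and a dbo default schema would rule both questions out.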

    Thursday, January 03, 2013 9:30 AM
  • Yes, I do have PKs on the tables. The schema name is dbo, and the user account is the original admin account created when the Azure DB was created. testdata was created by importing our live database into 'testdata' (through the SQL Server Management Studio 2012 data-tier export). As the admin login (it's also the only login!), I'm able to see everything inside the destination (testdata) just fine through VS2012 as well as SQL Server Management Studio. The same login doesn't seem to see anything through Sync (either directly on the Azure SQL DB or through the agent for local SQL).

    What's interesting is that if I create new tables from scratch in testdata, those tables are picked up by SQL Server Management Studio, VS2012, and SQL Sync services.

    Friday, January 04, 2013 5:06 AM
  • Anything special about your table names? Spaces, dots, etc.?
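
    A quick way to rule that out from a query window (a sketch against the standard sys.tables catalog view):

```sql
-- Find user tables whose names contain spaces or dots
SELECT name
FROM sys.tables
WHERE name LIKE '% %' OR name LIKE '%.%';
```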
    Friday, January 04, 2013 6:06 AM
  • No ... it's just "[dbo].[UserProfile]" (or dbo.UserProfile, depending on your style) :/
    Friday, January 04, 2013 7:28 AM
  • Hey Sid,

    It seems this is related to the tables having been imported and created through the Management Studio data-tier export. How about trying the following workaround: create a sync group which contains one on-premises DB, a SQL Azure hub DB (azuredb1), and another SQL Azure DB (azuredb2), where the on-premises DB has all the tables and data you want to sync to the Azure DBs.



    Oliver Yao - MSFT

    Monday, January 07, 2013 2:34 PM
  • Hi Oliver,

    If I set up the tables fresh from scratch (either locally or remotely), then it works fine. However, we have actual live tables that we'd like to migrate to Azure. I presume this is also the use case targeted by the Azure SQL team working on Azure SQL Sync.

    For some reason, something in Azure SQL Sync is not right when the tables are migrated from our local production database into our local test database through SQL Server Management Studio (even though everything else apart from Sync, i.e. our application, test software, etc., sees the new test database as identical). How does the SQL Azure team test this scenario? I'm sure they must be backing up and then restoring a database into a sync group to test it. Perhaps I can follow that same methodology if SQL Server Management Studio 2012 doesn't work with Azure Sync.

    Please note that I'm intentionally not plugging our local production database directly into Azure SQL Sync - I can already see that Sync creates several additional tables, and we want to test-drive the system before accepting those additional tables as "schema pollution". That's why we're only doing this on a copy of our database as part of the test.



    • Edited by Sid.S Monday, January 07, 2013 10:58 PM
    Monday, January 07, 2013 10:57 PM
  • Sid,

    if you can script out one or two of the problem tables (via Generate Scripts and via data-tier export), send the file to junetidlethoughts at Hotmail dot com and I'll see if I can reproduce.



    Tuesday, January 08, 2013 3:03 AM
  • Hi guys,

    I figured this out; it was mostly a configuration error on our part. For some reason, the copy of the database which we were using for testing had the clustered-index PK constraints missing on some tables. The same tables in the original database do have the PK constraints, so why the backup is missing them is intriguing. IIRC, I had used a .bacpac (or some similar) file to back it up, but it's strange that it would let me export into an inconsistent state. Anyway, this time around I used good old .sql scripts, the PKs came through in the copy, and SQL Sync was then able to see the tables in question.
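
    For anyone landing here later: whether the constraint survived the copy is easy to see in the generated script. A hypothetical sketch of the shape that works, reusing the UserProfile table name from earlier (columns invented for illustration) - the PK constraint is declared explicitly, so Data Sync can see the table:

```sql
-- Explicit clustered PK constraint; tables without one are
-- invisible to the sync rules page
CREATE TABLE dbo.UserProfile (
    UserId   INT           NOT NULL,
    UserName NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_UserProfile PRIMARY KEY CLUSTERED (UserId)
);
```

    If the scripted table ends after the column list with no CONSTRAINT ... PRIMARY KEY clause, the copy is missing its PK and will hit the "database is empty" symptom above.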

    So JuneT, I've got to apologize to you - your original hunch on PKs was right; when I verified earlier, I had connected to the wrong database. It was when I tried making the backup database to email to you that I noticed this while reviewing the .sql files. Thanks for the offer to review the .sql file - I appreciate the gesture, but am glad this sorted itself out!

    • Edited by Sid.S Tuesday, January 08, 2013 3:43 AM
    Tuesday, January 08, 2013 3:42 AM
  • Guys,

    Just a quick note that the "sync rules" page in the HTML5 portal filters out "incompatible" tables, such as tables without a PK, so that you won't accidentally select one of those tables and then wait for the provisioning to fail.


    Thursday, January 10, 2013 3:48 AM
  • Hey Chris,

    It would probably be more user-friendly to list the table but flag it as incompatible/unselectable (i.e. the user still can't select it, so user expectations aren't broken). Or have a checkbox saying "Display incompatible tables" on the HTML5 portal page.

    I totally get that, per the documentation etc., it's the "user's fault", but this would be a good defensive-programming check (plus less support traffic too?).



    Thursday, January 10, 2013 4:02 AM
  • Sid,

    You are probably right. I'll relay this to the team and see what we can do about it in the next release.

    Thanks for the suggestion!

    Thanks, C

    Thursday, January 10, 2013 5:37 AM