Initial Data Sync failing

  • Question

  • We have an existing SQL Azure DB that is fairly straightforward. Some of the tables have ~10 million rows, but the relationships aren't complex. We created a new empty DB in our on-prem SQL 2012 instance and followed the online wizard to set up bidirectional sync between the two. The provisioning was quick and painless. Unfortunately, after several hours the sync failed with the following error:

    Sync failed with the exception "Sync was aborted because more than 1000 changes failed to apply. Examine your table schemas to look for conflicting constraints or incompatible data types that may prevent sync from succeeding."

    For more information, provide tracing id ‘85ccb503-cc5f-4841-8c06-6155c3a2ba93’ to customer support.

    Is there any way to get more detail as to what the errors were?

    Friday, May 4, 2012 9:06 PM

Answers

  • After reviewing the original source DB (the one we exported the bacpac from), it looks like the export did not operate as expected. For the foreign table, we use an identity column as its primary key, which is fine for our sync scenario since we rarely add rows and can handle changes all in one place. However, when that table was imported, the primary keys were regenerated, so if the original keys were 1, 2, 5, 7, they are now 1, 2, 3, 4. Unfortunately, the foreign key references were not updated and still point to 1, 2, 5, 7. The data is all there, but the references are broken and we need to revert to the original data to recover.

    This issue is unrelated to Data Sync, but is a huge gotcha and, in our case, presented as a symptom in sync.
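
    For anyone hitting the same thing, recovery boils down to reloading the parent table with its original key values instead of letting the target regenerate them. A minimal T-SQL sketch with placeholder names (dbo.Parent for the foreign table, dbo.Parent_staging for a copy of the original rows, and an Id/Name column pair); in practice you also have to clear the target table and disable or drop the referencing foreign keys first:

    SET IDENTITY_INSERT dbo.Parent ON;

    -- An explicit column list is required while IDENTITY_INSERT is ON.
    INSERT INTO dbo.Parent (Id, Name)
    SELECT Id, Name
    FROM dbo.Parent_staging;   -- original rows, with their original identity values

    SET IDENTITY_INSERT dbo.Parent OFF;

    With the original key values back in place, the existing foreign key references line up again without touching the child tables.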

    • Proposed as answer by Han, MSFT Wednesday, May 9, 2012 1:45 AM
    • Marked as answer by Ed Kaim Wednesday, May 9, 2012 1:48 AM
    Tuesday, May 8, 2012 8:12 PM

All replies

  • Have a look at:

    Event Viewer->Applications and Services Logs->Sql Azure Data Sync Preview

    or Event Viewer->Applications and Services Logs->Data Sync Service

    You may also set logging to verbose mode.

    Open LocalAgentHost.exe.config in Notepad. This file should be present in your installation directory.

    a) Uncomment the section that is currently commented out:
    <!--
    <switches>
        <add name="SyncAgentTracer" value="4" />
    </switches>
    <trace autoflush="true" indentsize="4">
        <listeners>
            <add name="myListener" type="Microsoft.SqlAzureDataSync.ClientLogging.DSSClientTraceListener, Microsoft.SqlAzureDataSync.ClientLogging, Version=2.0.0.0" initializeData="DSSAgentOutput.log" />
            <remove name="Default" />
        </listeners>
    </trace>
    -->

    b) Stop and restart the SQL Azure Data Sync Preview Windows service.
    You will then find the detailed logs in files named DSSAgentOutput*.log.

    Friday, May 4, 2012 10:58 PM
    Answerer
  • Thank you for your quick reply. After reviewing the event log, the entries that are not "SyncAbortedDueToLargeErrorCountException" are:

    id:ClientRequestHandler_ExceptionProcessingRequest, rId:, sId:780dfe65-f09b-411b-bae9-671e22db2725, taskId:ca403768-462b-4b88-87d2-e944c977e344, e:'Type=Microsoft.Synchronization.Data.DbSyncException,Message=An unexpected error occurred when applying batch file C:\Users\BTFTES~1\AppData\Local\Temp\DSS_syncjobst34lfqtryhojv5kgyzvrz4b\af958975-9cb5-4597-8857-d983087a7286.batch. See the inner exception for more details.,Source=Microsoft.Synchronization,StackTrace=   at Microsoft.Synchronization.Data.DbSyncBatchConsumer.ApplyBatches(DbSyncScopeMetadata scopeMetadata, DbSyncSession syncSession, SyncSessionStatistics sessionStatistics)
       at Microsoft.Synchronization.Data.RelationalSyncProvider.ProcessChangeBatch(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, Object changeDataRetriever, SyncCallbacks syncCallbacks, SyncSessionStatistics sessionStatistics)
       at Microsoft.Synchronization.KnowledgeProviderProxy.ProcessChangeBatch(CONFLICT_RESOLUTION_POLICY resolutionPolicy, ISyncChangeBatch pSourceChangeManager, Object pUnkDataRetriever, ISyncCallback pCallback, _SYNC_SESSION_STATISTICS& pSyncSessionStatistics)
       at Microsoft.Synchronization.CoreInterop.ISyncSession.Start(CONFLICT_RESOLUTION_POLICY resolutionPolicy, _SYNC_SESSION_STATISTICS& pSyncSessionStatistics)
       at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWaySyncHelper(SyncIdFormatGroup sourceIdFormats, SyncIdFormatGroup destinationIdFormats, KnowledgeSyncProviderConfiguration destinationConfiguration, SyncCallbacks DestinationCallbacks, ISyncProvider sourceProxy, ISyncProvider destinationProxy, ChangeDataAdapter callbackChangeDataAdapter, SyncDataConverter conflictDataConverter, Int32& changesApplied, Int32& changesFailed)
       at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWayKnowledgeSync(SyncDataConverter sourceConverter, SyncDataConverter destinationConverter, SyncProvider sourceProvider, SyncProvider destinationProvider, Int32& changesApplied, Int32& changesFailed)
       at Microsoft.Synchronization.KnowledgeSyncOrchestrator.Synchronize()
       at Microsoft.Synchronization.SyncOrchestrator.Synchronize()
       at Microsoft.SqlAzureDataSync.SyncControllerLib.SyncController.SyncInternal(SyncJobParams syncJobParams, SyncDirectionOrder syncDirectionOrder, String batchingDirectoryPath)
       at Microsoft.SqlAzureDataSync.LocalAgentHost.LocalAgentSyncControllerService.ExecuteSync(SyncJobParams syncJobParams, SyncDirectionOrder syncDirection, String batchingDir, Func`4 syncInternal)
       at Microsoft.SqlAzureDataSync.SyncControllerLib.SyncController.Sync(String scopeName, SyncJobParams syncJobParams, SyncDirectionOrder syncDirection, String batchingDir)
       at Microsoft.SqlAzureDataSync.AgentHostLib.ClientRequestHandler.Sync(SyncRequest request, String batchingDir),'
    'Type=Microsoft.Synchronization.Data.DbSyncException,Message=Cannot read scope information for scope 40DC3E7F-8C89-4EC1-98B6-D43E2040725D. Ensure that the scope name is correct and the SqlCeSyncScopeProvisioning.Apply method was called to create that scope.,Source=Microsoft.Synchronization.Data.SqlServer,StackTrace=   at Microsoft.Synchronization.Data.SqlServer.SqlSyncScopeHandler.ReadScope(IDbConnection connection, IDbTransaction transaction, ReadKnowledgeType readType, ReadForgottenKnowledgeType readFKtype)
       at Microsoft.Synchronization.Data.SyncScopeHandlerBase.ReadScope(IDbConnection connection, ReadKnowledgeType readType)
       at Microsoft.Synchronization.Data.RelationalSyncProvider.MultiTransactionApplyChangesAdapter.CommitChanges(ChangeApplicationAction& continueAction)
       at Microsoft.Synchronization.Data.DbSyncBatchConsumer.ApplyBatches(DbSyncScopeMetadata scopeMetadata, DbSyncSession syncSession, SyncSessionStatistics sessionStatistics),', eType:'Type=Microsoft.Synchronization.Data.DbSyncException', eMessage:'Message=An unexpected error occurred when applying batch file C:\Users\BTFTES~1\AppData\Local\Temp\DSS_syncjobst34lfqtryhojv5kgyzvrz4b\af958975-9cb5-4597-8857-d983087a7286.batch. See the inner exception for more details.', eTypeInner:'Type=Microsoft.Synchronization.Data.DbSyncException', eMessageInner:'Message=Cannot read scope information for scope 40DC3E7F-8C89-4EC1-98B6-D43E2040725D. Ensure that the scope name is correct and the SqlCeSyncScopeProvisioning.Apply method was called to create that scope.'

    These files are no longer on the system, so I don't have a way to review them. Any thoughts?

    Friday, May 4, 2012 11:36 PM
  • Have you enabled verbose logging?

    The files mentioned in the log are batch files used to send the updates in batches; the service automatically cleans them up.

    This one:

    "Cannot read scope information for scope 40DC3E7F-8C89-4EC1-98B6-D43E2040725D. Ensure that the scope name is correct and the SqlCeSyncScopeProvisioning.Apply method was called to create that scope."

    indicates a potential problem in the provisioning.

    Can you check the scope_info table to see whether there is a scope with this name: 40DC3E7F-8C89-4EC1-98B6-D43E2040725D? (A quick check is sketched below.)

    The error also refers to SqlCeSyncScopeProvisioning; the Data Sync Service doesn't support SQL CE.
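
    A minimal way to check, as a sketch (the metadata table name varies with how the database was provisioned, e.g. scope_info or scope_info_dss, so point it at whichever one actually exists in your DB):

    -- List all provisioned sync scopes and look for the GUID
    -- 40DC3E7F-8C89-4EC1-98B6-D43E2040725D in the scope name column.
    SELECT *
    FROM dbo.scope_info_dss;

    If no row mentions that GUID, the scope named in the error was never provisioned in that database.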

    Friday, May 4, 2012 11:48 PM
    Answerer
  • We haven't yet enabled verbose logging. We have a new DB trying to sync from scratch and don't want to interrupt the process in case we just had bad luck on the first try and this one will work.

    I checked the scope_info_dss table from the original on-prem DB and it didn't have a scope with the name 40DC3E7F-8C89-4EC1-98B6-D43E2040725D.

    We haven't done anything with SQL CE on this machine, so I'm not sure where that's coming from. The DB server has a vanilla install of SQL 2012 Enterprise and nothing else, as far as I know.

    Saturday, May 5, 2012 5:00 AM
  • The new DB sync failed as well, with the same errors. It also seems to have failed at the same point in the sync process, since the same subset of data had been imported prior to the errors. After enabling verbose logging and trying again, it looks like there's no additional info beyond what's put into the event log. We're not using any unsupported types and there are no conflicting constraints, but the fact that the sync always stops at the same point seems to indicate there might be an issue with the data itself. However, if the issue is actually with the provisioning, is the best solution to deprovision everything (and delete the sync group) and then set everything up again from scratch?
    Monday, May 7, 2012 5:17 AM
  • Look for files named DSSAgentOutput*.log in the same folder where you installed the Sync Agent.

    When configuring a new sync group, make sure only one database has a copy of the data. For example, if you have an Azure hub and an on-premises DB, make sure that either only the hub or only the on-premises DB is prepopulated with data, not both. If you preload both databases with data, that will cause conflicts, since the service has no idea that the data in the two databases is one and the same. (A quick way to confirm a member DB is empty is sketched below.)

    Yes, you might want to deprovision the sync group and recreate it. Again, make sure only one DB has a copy of the data.
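
    As a sketch, one way to confirm that a member database really is empty before adding it to the sync group (standard catalog views; the counts from sys.partitions are approximate but good enough for spotting preloaded tables):

    -- Per-table row counts for the current database.
    SELECT t.name AS table_name, SUM(p.rows) AS row_count
    FROM sys.tables AS t
    JOIN sys.partitions AS p
        ON p.object_id = t.object_id
       AND p.index_id IN (0, 1)   -- heap or clustered index only
    GROUP BY t.name
    ORDER BY row_count DESC;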

    Monday, May 7, 2012 5:40 AM
    Answerer
  • Ed,

      Are you syncing all columns for all tables? 

    Tuesday, May 8, 2012 5:04 PM
  • After taking a closer look, we suspect the sync is always failing at the same place because an FK constraint is being violated on the client-side insert. 200 of the 35K rows of the foreign table are missing from the hub DB, and the 1,000th row referencing missing data is over 2 million records in. We have no idea what happened to this data or where it was lost, but it's definitely not in the bacpac we used for the import, so it must have been lost during the export process. It also means that all the DBs we've created from that bacpac are in a somewhat broken state. I guess we had incorrectly assumed that a bacpac import failure would raise an error, which does not seem to be the case. Given that we've burned hours re-uploading and re-importing bacpacs due to SQL Azure throwing a fit over a legacy sproc referencing a since-renamed column, I thought it was safe to assume that the fundamental integrity of the DB was on the checklist. Lesson learned.
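
    For reference, orphaned references like this can be found with a query along these lines (placeholder names: dbo.Child referencing dbo.Parent via ParentId; substitute your own FK pair):

    -- Child rows whose foreign key points at a parent row that doesn't exist.
    SELECT c.*
    FROM dbo.Child AS c
    LEFT JOIN dbo.Parent AS p
        ON p.Id = c.ParentId
    WHERE c.ParentId IS NOT NULL
      AND p.Id IS NULL;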

    Tuesday, May 8, 2012 7:18 PM