Best way to synchronize data between workstations

  • Question

  • I'm not sure if this is the right forum to ask this question on, so if it should be on a different one, please let me know.

    We have an application that connects to a SQL Server 2005 database to query, insert, and update data. If the connection between the workstations (where the application resides) and the SQL Server fails while an insert, update, or query is being executed, we want a failover where the application uses a local database to perform the operation. Each of the workstations (about 30 of them) will run a service that, when it detects data in the local DB, tries to connect to the SQL Server and move those transactions up to the server. This, I think, will work fine if the main SQL Server is down for a brief moment or the network hiccups and then re-establishes the connection.
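
    To make that concrete, the kind of local fallback table I'm picturing on each workstation is sketched below; the table and column names are only placeholders, nothing we have actually built yet.

    ```sql
    -- Placeholder sketch of a local "outbox" table on each workstation's database.
    -- Table and column names are illustrative only.
    CREATE TABLE dbo.ClockEventOutbox
    (
        OutboxId     INT IDENTITY(1,1) PRIMARY KEY,
        EmployeeId   INT          NOT NULL,
        JobNumber    VARCHAR(20)  NOT NULL,   -- e.g. 'Job#123'
        EventType    CHAR(3)      NOT NULL,   -- 'IN' or 'OUT'
        EventTimeUtc DATETIME     NOT NULL DEFAULT (GETUTCDATE()),
        PushedUtc    DATETIME     NULL        -- NULL until the service has moved the row to the main server
    );
    ```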

    The bigger issue is if the main SQL Server is down, or communication to it is lost, for a longer period of time. If that happens, what we are looking at doing is synchronizing the data between the workstations. The reason we need to do this is the following: if a user clocks into a job, Job#123, on workstation A and later goes to another workstation, B, to clock out of the same job, workstation B needs to know about Job#123. (This is a labor tracking application, if that helps.) What I need help with is the best way to design this so that there is minimal downtime and the users aren't concerned with where the data is being stored, only that the application is up and running all the time and working properly.

    I have never done anything like this so I'm not sure what the best way to do this is.  Any help or suggestions you can give me would be greatly appreciated.

    Regards,

    Kevin

    Wednesday, April 10, 2013 5:16 PM

Answers

  • Hello Kevin,

    I think you should be looking at database mirroring in this case. Instead of having local databases on 30 workstations and synchronizing them, you can set up database mirroring; when the main SQL Server (the principal) is lost or down for a while, your second SQL Server (the mirror) brings the database online and the users can continue their work without any problems. Additionally, the application can fail over automatically (no additional steps to force a failover). To use this feature, configure database mirroring in high-safety mode; refer to http://msdn.microsoft.com/en-us/library/dd207006.aspx
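
    Just to give a rough picture of what that setup looks like, here is a sketch. The server names, port numbers and the database name are placeholders, and the prerequisites (mirroring endpoints on each server, a full backup of the database restored on the mirror WITH NORECOVERY) are covered in the linked article.

    ```sql
    -- Illustrative only: server names, ports and the database name are placeholders.
    -- Prerequisites (see the linked article): mirroring endpoints on every server and
    -- a full backup of the database restored on the mirror WITH NORECOVERY.

    -- 1. On the mirror server, point it at the principal:
    ALTER DATABASE LaborTracking
        SET PARTNER = 'TCP://principal.yourdomain.local:5022';

    -- 2. On the principal server, point it at the mirror:
    ALTER DATABASE LaborTracking
        SET PARTNER = 'TCP://mirror.yourdomain.local:5022';

    -- 3. Still on the principal: high-safety (synchronous) mode, plus a witness
    --    server so failover can happen automatically.
    ALTER DATABASE LaborTracking SET PARTNER SAFETY FULL;
    ALTER DATABASE LaborTracking SET WITNESS = 'TCP://witness.yourdomain.local:5022';

    -- On the application side, the ADO.NET connection string can name the mirror as
    -- a failover partner so clients redirect automatically, e.g.:
    --   Data Source=PrincipalServer;Failover Partner=MirrorServer;
    --   Initial Catalog=LaborTracking;Integrated Security=True;
    ```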

    Also, google 'database mirroring' for more videos on how to set this up.


    Hope it Helps!!

    • Proposed as answer by Uri DimantMVP Thursday, April 11, 2013 4:45 AM
    • Marked as answer by Fanny Liu Tuesday, April 16, 2013 10:04 AM
    Wednesday, April 10, 2013 11:41 PM
  • So it really isn't the right forum, but I don't know where it would be best to move it. I am thinking high availability is the place to go. Just wondering, before I move it: why 2005?

    And probably the bigger question: why are you concerned about connectivity? Is a disconnect between the devices going to be the norm? I would suggest you architect the client to save its data locally by default and then have a background process move the data to the primary server. So instead of thinking of it as a service that detects connection problems, have the service always do the saving of the data, so there is only one code path to support. It will be a bit more complex in the "normal" case, but testing will be a lot easier, and the offline solution is handled. Use the same service to fetch data from the server; a rough sketch of the store-and-forward step is below.

    Personally, I would work to ensure the primary server is highly available, to avoid having to sync the clients to one another when the main server fails. Connectivity may be a very real concern, but an outage that cuts the clients off from the server would probably prevent them from syncing with each other too.
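
    Purely as an illustration, something like the batch below could run periodically on each workstation's local database. It assumes an outbox table like the one sketched in the question and a linked server (here called MAINSQL) pointing at the central instance; all of the names are placeholders.

    ```sql
    -- Sketch only: assumes a local outbox table like the one in the question and a
    -- linked server named MAINSQL pointing at the central LaborTracking database.
    DECLARE @batch TABLE (OutboxId INT PRIMARY KEY);

    -- Pick up the rows the service has not forwarded yet.
    INSERT INTO @batch (OutboxId)
    SELECT OutboxId
    FROM   dbo.ClockEventOutbox
    WHERE  PushedUtc IS NULL;

    BEGIN TRY
        -- Copy the batch to the central server...
        INSERT INTO MAINSQL.LaborTracking.dbo.ClockEvent
                (EmployeeId, JobNumber, EventType, EventTimeUtc)
        SELECT  o.EmployeeId, o.JobNumber, o.EventType, o.EventTimeUtc
        FROM    dbo.ClockEventOutbox AS o
        JOIN    @batch AS b ON b.OutboxId = o.OutboxId;

        -- ...and only then mark it as forwarded. A real implementation would want a
        -- distributed transaction or an idempotent key on the central table so a
        -- failure between these two statements cannot create duplicates.
        UPDATE  o
        SET     PushedUtc = GETUTCDATE()
        FROM    dbo.ClockEventOutbox AS o
        JOIN    @batch AS b ON b.OutboxId = o.OutboxId;
    END TRY
    BEGIN CATCH
        -- Central server unreachable or the insert failed: rows stay queued for the next run.
        PRINT 'Forwarding skipped: ' + ERROR_MESSAGE();
    END CATCH;
    ```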

    I think you can use merge replication to do some of this the way you want, or you might check out the Sync Framework: http://msdn.microsoft.com/en-us/library/bb902827.aspx. I haven't tried either personally (this is the database design forum :) but I have seen merge replication work in days past.
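
    If you do go the merge replication route, the server-side setup is along these lines. This is a very rough outline with placeholder names; the distributor, the snapshot agent and the subscription on each workstation still have to be configured, which the documentation walks through.

    ```sql
    -- Rough outline only; database, publication and table names are placeholders,
    -- and the distributor, snapshot agent and workstation subscriptions are not shown.
    EXEC sp_replicationdboption
         @dbname  = N'LaborTracking',
         @optname = N'merge publish',
         @value   = N'true';

    EXEC sp_addmergepublication
         @publication = N'LaborTracking_Merge',
         @description = N'Merge publication for the workstation subscribers';

    EXEC sp_addmergearticle
         @publication   = N'LaborTracking_Merge',
         @article       = N'ClockEvent',
         @source_owner  = N'dbo',
         @source_object = N'ClockEvent';
    ```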


    Louis

    Without good requirements, my advice is only guesses. Please don't hold it against me if my answer answers my interpretation of your questions.

    • Marked as answer by Fanny Liu Tuesday, April 16, 2013 10:04 AM
    Thursday, April 11, 2013 5:17 AM

All replies

  • Hi PKGMedina,

    A design where the client machines act as a local backup, with a requirement for peer-to-peer synchronization, is a very complex and problematic solution to pull off. Instead, think about designing your system so that the main SQL Server database is highly available (HA). You can do this in a number of ways.

    SQL Server clustering - This will help you in cases where the SQL Server service becomes unavailable on one node. The secondary node then takes over as the primary. In this setup you would have SQL Server installed on two cluster nodes with shared storage on a SAN (Storage Area Network).

    Database mirroring - This helps in the same case, when one server or instance is unavailable. The principal server in the mirroring pair continuously pushes its transaction log records over to the mirror server in near real time.

    I hope this gets you on the right track.

    Thursday, April 11, 2013 3:21 AM