Synchronizing Legacy DB and Application DB

  • Question

  • Hi All,

    We are developing a solution that needs to consult some information held on legacy systems. There is also a requirement to keep the system available even if the legacy systems are down. We are thinking about replicating the information into our application DB as a contingency (slightly stale data is acceptable, since the information we need does not change often).

    My questions are: has anyone used a different solution for a similar requirement? If not, what would be the best way to do this?

    1) Implement a Windows service that updates our DB copy from time to time (the problem here is that our application runs on a cluster, so we need to control concurrency between the services running on each cluster machine)

    2) Use some mechanism of our DB server to perform the synchronization (we are using SQL Server). I don't know whether such a mechanism exists; please let me know if there is one.
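    Option 1 can be sketched as follows — a minimal Python sketch where the cluster-wide mutual exclusion is abstracted behind an acquire function. In production this would be backed by a shared lock (for instance SQL Server's sp_getapplock); a local threading.Lock stands in here, and all names are illustrative:

    ```python
    # Sketch of option 1: a periodic refresh job that runs on only one
    # cluster node at a time. Mutual exclusion is abstracted behind
    # acquire_lock() so it can be backed by a shared DB-side lock;
    # a threading.Lock stands in for demonstration.
    import threading

    def refresh_copy(acquire_lock, do_refresh):
        """Run do_refresh() only if this node wins the shared lock."""
        if not acquire_lock():
            return False          # another cluster node is already refreshing
        do_refresh()
        return True

    # Demo: two "nodes" race for the same lock; only one performs the refresh.
    lock = threading.Lock()
    refreshed = []
    ran = [refresh_copy(lambda: lock.acquire(blocking=False),
                        lambda: refreshed.append("node%d" % i))
           for i in range(2)]
    # ran -> [True, False]; only the first node refreshed the copy
    ```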

          

    Thanks and best regards,


    Sergio

    Monday, February 5, 2007 3:13 PM

Answers

  • Hi Sergio,

    I believe that your solution of replicating the data from the legacy system to a new relational database is a good approach.  With SQL Server your best option for this kind of architecture is to use SQL Server Integration Services (http://msdn2.microsoft.com/en-us/sql/aa336312.aspx).

    Graham

    Monday, February 5, 2007 4:33 PM
    From your question, I understand that the legacy DB is your primary source and the "application DB" is your backup. In such scenarios, a good choice is to put an application-DB proxy in front of the legacy DB. I mean, your application can use chain-of-responsibility and proxy-like structures that switch databases on the fly: on read cycles, try to extract the data from the legacy system and, if that fails, fall back to the application DB; on write cycles, insert into both by firing a single command. But be sure to log the operations and to use transactions for both reading and writing. Such an approach should help the system run reliably.

    Last but most important, keep in mind that such data-transport tasks (reading and writing across data sources) usually introduce many trade-offs. For instance, the solution above carries communication and resource-management costs. In the end, you have to pick a point on the performance vs. reliability trade-off.
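    The read-side fallback chain described above can be sketched in Python; the fetch functions are hypothetical stand-ins for the real legacy and application DB calls:

    ```python
    # Sketch of the read-side fallback: try the legacy DB first and fall
    # back to the application DB if the legacy read fails.
    def read_with_fallback(key, sources):
        """Try each (name, fetch) pair in order; return the first success."""
        for name, fetch in sources:
            try:
                return name, fetch(key)
            except Exception:
                continue              # source unavailable: try the next one
        raise LookupError("all data sources failed for %r" % key)

    def legacy_fetch(key):
        raise ConnectionError("legacy system is down")   # simulate an outage

    def app_db_fetch(key):
        return {"id": key, "value": "cached copy"}       # local replica answers

    source, row = read_with_fallback(42, [("legacy", legacy_fetch),
                                          ("app_db", app_db_fetch)])
    # source -> "app_db": the local replica served the read
    ```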

    Ekrem Aksoy

    eaxthearchitecture.blogspot.com

    Tuesday, February 6, 2007 2:42 PM
  • Sergio,

    I would suggest SSIS (SQL Server Integration Services) for replication between the legacy DB and your application DB. It has many built-in capabilities for workflow management and the ETL process.

     

    Tuesday, February 6, 2007 5:48 PM
  • Hi Sergio,

     

    I've used both technologies, and they are essentially the same; SSIS is the enhanced and extended successor of DTS. So you can consider using either of them.

    Regards,

    eax

    eaxthearchitecture.blogspot.com

    Friday, February 9, 2007 11:11 AM

All replies

  • Hi Sudheer, eax and Graham

    Thanks for the suggestions! I'm strongly considering using the suggested SSIS for the integration/replication, but we need to check whether the customer will use SQL Server 2005.

    Both the Standard and Enterprise editions support SSIS, correct? In case we cannot use SQL Server 2005, I've found a related technology in SQL Server 2000; I don't remember the name right now, but I think it is DTS (Data Transformation Services).

    Could you please tell me the pros and cons of each of these technologies?

     

    Thanks!

     

    Sergio

     

    Thursday, February 8, 2007 4:07 PM
    It usually goes like this:

    - set up an FTP server on the legacy system, or on some other system that can exchange files with the legacy system

    - the legacy system exports data to a uniquely named text file (usually with a timestamp in its name) and puts it in a predefined directory; this happens every X minutes/hours/days

    - get the exported data via FTP

    - do a bulk insert into a temporary staging table (for the bulk insert to work, the target SQL Server must have access to the exported data file you've FTP-ed)

    - do your checks and insert into the real table

    - you can use DTS (SQL Server 2000) or SSIS (SQL Server 2005) on the target system; you don't have to, since it can all be done in T-SQL, but it simplifies the flow
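    The pickup step can be sketched in Python — choosing the newest timestamped export file and building the T-SQL BULK INSERT statement for it. The file-naming convention, field terminator, and table name are illustrative assumptions:

    ```python
    # Sketch of the file-pickup step: pick the newest timestamped export
    # from the download directory and build the bulk-insert command for it.
    import re

    def newest_export(filenames):
        """Return the export file with the latest 14-digit timestamp in its name."""
        stamped = [(re.search(r"(\d{14})", f).group(1), f)
                   for f in filenames if re.search(r"\d{14}", f)]
        return max(stamped)[1]       # lexicographic order == chronological here

    def bulk_insert_sql(path, staging_table="staging_legacy"):
        # BULK INSERT is standard T-SQL; the WITH options shown are common ones.
        return ("BULK INSERT %s FROM '%s' "
                "WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\\n')"
                % (staging_table, path))

    files = ["export_20070210120000.txt", "export_20070211120000.txt"]
    latest = newest_export(files)    # -> "export_20070211120000.txt"
    ```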

     

    Consider Microsoft BizTalk Server or Microsoft Host Integration Server as well.

    Tuesday, February 13, 2007 11:24 AM
  • great answer
    Tuesday, February 13, 2007 2:23 PM
  • SQL Server 2005 Mobile Replication Components with replicas

    Tuesday, February 13, 2007 6:27 PM