How to achieve data redundancy across data centers?

  • Question

  • We have two database servers running in different datacenters, connected through two 100 Mb/s data links. Each database server is connected to 12 application servers. The total number of transactions per day from each application server is between 150K and 200K. Our requirement is to have all application servers at a site write to a single database at that location, and then have these databases merged into one using replication, so that at any given instant the database in either datacenter contains not only the entries from its local application servers but also those from the other datacenter. We tried using peer-to-peer replication, but it caused severe blocking and a few of our application servers crashed. If we stop the distribution agent during production, we have to sync each table manually afterwards. Is there any easy way to achieve the above requirement within SQL Server 2005? Can we eliminate the blocking in replication (don't tell me to use snapshot isolation or to reduce CommitBatchThreshold / CommitBatchSize; we have already tried that, and the blocking is reduced but not eliminated)?

    Friday, May 2, 2008 10:20 PM

All replies

  • I'm not sure I understand your description of the scenario. When you say 'db', do you mean a database or a SQL Server instance?

    1) You have 24 application servers (spread across 2 instances), each writing to its own database, and then these 24 databases are consolidated into a single one (on one of the instances).

    2) You have 12x2 application servers (spread across 2 instances), each writing to its own database, and then each of the 12 databases on one instance is consolidated with its counterpart database on the other instance.

    3) You have 24 application servers (spread across 2 instances), each writing to its own database, then each of these 24 databases is consolidated with the other 23.

    4) You have 2 databases on two instances and these two databases are peer-to-peer consolidated into each other. Each instance is accessed by 12 application servers writing into the local instance database. (I think this is the scenario you're most likely describing)


    What is the problem your deployment is trying to solve? This does not look like a typical high-availability scenario, nor a disaster-recovery one; at most it solves some data consolidation problems.


    Database Mirroring (which this forum is about) is not going to solve your problem; it is not a data consolidation feature but an availability/recoverability feature.


    Are you looking specifically for a replication-driven solution? How many tables are we talking about? Do you need the conflict resolution, identity partitioning, and other replication features? I've seen similar scenarios addressed successfully by deploying Service Broker to propagate the updates between sites, but that involves some (significant amounts of) code to be written. Look at my blog entry at http://rusanu.com/2007/04/25/reusing-conversations/ for how to use conversations to eliminate contention, but you're still going to need the logic to pack changes into messages (XML) and then shred the messages into tables on the other site.
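    As a rough illustration of the conversation-reuse pattern that blog post describes (all object names here, including the services, contract, message type, and the SessionConversations table, are hypothetical, not something from this thread): each session caches its dialog handle and sends every change on the same conversation, instead of paying the cost and contention of creating a new dialog per message.

    ```sql
    -- Sketch only; assumes the Service Broker services, contract, and
    -- message type below have already been created.
    CREATE TABLE SessionConversations (
        SPID   int              NOT NULL PRIMARY KEY,
        Handle uniqueidentifier NOT NULL);
    GO

    CREATE PROCEDURE usp_SendChange @change xml
    AS
    BEGIN
        DECLARE @handle uniqueidentifier;

        -- Reuse the conversation this session already opened, if any
        SELECT @handle = Handle
            FROM SessionConversations
            WHERE SPID = @@SPID;

        IF @handle IS NULL
        BEGIN
            BEGIN DIALOG CONVERSATION @handle
                FROM SERVICE [LocalChangeService]
                TO SERVICE 'RemoteChangeService'
                ON CONTRACT [ChangeContract]
                WITH ENCRYPTION = OFF;

            INSERT INTO SessionConversations (SPID, Handle)
                VALUES (@@SPID, @handle);
        END;

        SEND ON CONVERSATION @handle
            MESSAGE TYPE [ChangeMessage] (@change);
    END;
    ```

    A real deployment would also need error handling, a strategy for recycling conversations periodically, and the activation procedure on the receiving site that shreds the XML messages back into tables.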

    But to be frank, I'm a bit surprised to hear that 2 databases cannot keep up with replication at this rate: 200K transactions/day works out to roughly one transaction every 0.43 seconds (86,400 s / 200,000). Perhaps it is worth giving that one more shot and tuning it with help from the replication experts.

    Saturday, May 3, 2008 5:34 AM
  • I think I failed to communicate the scenario.

    My scenario is the 4th one: 2 databases on 2 different instances, with 12 application servers writing to each. Peer-to-peer replication does keep up with the data, but the databases are experiencing blocking due to the distribution agents. Is there any quick fix to reduce the blocking?

    Thursday, May 8, 2008 5:42 PM