Sync optimization for transactional replication

  • Question

  • Looking for ways to optimize what I just inherited. 

Got a transactional replication database (3.5TB and growing) that has two subscribers. Not all data is replicated: perhaps 1TB goes to one subscriber, and a few hundred GB to the other. No filtering is in use.

Once a month, I have to tear down and recreate the Integration and UAT environments, each of which includes this 3.5TB database, and the process of creating snapshots and distributing the data is quite slow.

    Network throughput between the servers is not great, and I'm trying to find the fastest way to get replication up and running. 

Initializing via a SQL backup would require restoring 3.5TB, and even with backup compression, I'm not sure how large the BAK would be or how long it would take to copy. TDE is in use, but this is SQL 2016, so backup compression should work with TDE.
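As a sketch of that option: on SQL Server 2016, backup compression only engages for a TDE-encrypted database when `MAXTRANSFERSIZE` is set larger than 64 KB. The database name and path below are placeholders, not from the original post.

```sql
-- Sketch, assuming a TDE-encrypted publisher database named PublisherDB.
-- MAXTRANSFERSIZE > 65536 is required on SQL 2016 for compression to
-- actually reduce the size of a TDE-encrypted backup.
BACKUP DATABASE [PublisherDB]
TO DISK = N'X:\Backups\PublisherDB.bak'
WITH COMPRESSION,
     MAXTRANSFERSIZE = 1048576,  -- 1 MB transfer size
     CHECKSUM,
     STATS = 5;
```

Afterwards, `msdb.dbo.backupset` (`backup_size` vs. `compressed_backup_size`) shows the actual compression ratio achieved, which answers the "how large would the BAK be" question empirically.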

Wondering if compression on the subscriber side might help. I looked at snapshot compression, but it seems archaic: the documentation notes that files larger than 2GB can't be compressed, which makes it basically ineffective here, if I understood it correctly.
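For reference, snapshot compression is enabled per publication and requires an alternate snapshot folder; the publication name and UNC path below are placeholders. Given the ~250GB snapshots mentioned below, the 2GB-per-file CAB limitation would indeed likely rule this out here.

```sql
-- Sketch, assuming an existing publication named MyPublication.
-- Compressed snapshots are written only to the alternate snapshot folder,
-- and individual snapshot files larger than 2GB cannot be compressed.
EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'alt_snapshot_folder',
     @value       = N'\\fileserver\repl_snapshots';

EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'compress_snapshot',
     @value       = N'true';
```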

    There are a total of 19 publications. A couple of the snapshots are ~250GB, others range from 18GB to 95GB.

    Thanks in advance

    Ned Otter MCSE Data Platform

    Tuesday, May 21, 2019 10:38 PM

All replies

  • Hi,

Yes, you can pre-seed the data with a backup instead of the snapshot that SQL Server generates. You can do this through SSMS, but you should check compatibility first. This link explains the process
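The T-SQL side of that process looks roughly like the sketch below (publication, subscriber, database, and path names are all placeholders): the publication must allow initialization from a backup, and the subscription then references the backup that was restored at the subscriber.

```sql
-- Sketch of "initialize with backup". Run at the publisher.
-- 1. Allow the publication to be initialized from a backup.
EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'allow_initialize_from_backup',
     @value       = N'true';

-- 2. After restoring the backup on the subscriber, create the
--    subscription pointing at that same backup file.
EXEC sp_addsubscription
     @publication      = N'MyPublication',
     @subscriber       = N'SUBSCRIBER1',
     @destination_db   = N'SubscriberDB',
     @sync_type        = N'initialize with backup',
     @backupdevicetype = N'disk',
     @backupdevicename = N'X:\Backups\PublisherDB.bak';
```

Note that the publisher retains log records from the time the backup is taken until the subscription is created, so the backup/copy/restore window still matters on a slow network.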

A .bak with compression is typically around 1/5 the size of the database, so it would also be worth shrinking the log first if possible, as the full backup includes the active portion of the log as well.
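If shrinking the log beforehand, something like the following sketch would apply (the logical log file name and target size are placeholders; check actual log usage first):

```sql
-- Sketch: check how full each transaction log is, then shrink the
-- publisher database's log file to a placeholder target size.
DBCC SQLPERF (LOGSPACE);

USE [PublisherDB];
DBCC SHRINKFILE (N'PublisherDB_log', 10240);  -- target size in MB
```

Shrinking only helps if the log has a large amount of unused space; on a replicated database, log records not yet delivered to the distributor will keep the active portion of the log from being truncated.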

I'm not familiar enough with replication to know how large the snapshot it takes will be, but in my experience a compressed .bak file is generally about 1/5 of the database size.

    I've also never dealt with compression on a subscriber before but I'm sure others here will have :-)

    Hope that helps :-)



    Sunday, May 26, 2019 6:33 PM