transaction log size growing very large

  • Question

  • Hi all,

    My transaction logs at the subscriber are growing very large, regardless of the recovery model. Any help?

    I'm using snapshot replication with push subscriptions.

    Thanks,

    Joey

    Tuesday, October 10, 2006 4:34 PM

Answers

  • If you want data to be bulk-loaded in small batches while the snapshot is being applied to the subscriber, you can:

    1) Specify the unofficial /DisableResumableSnapshot parameter on the Distribution Agent command line; this prevents each bcp file from being applied in a single atomic transaction.

    2) Specify a smaller batch size via the /BcpBatchSize parameter.
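
    For illustration only, a Distribution Agent command line using both switches might look like the following; the server, database, and publication names are placeholders, and since /DisableResumableSnapshot is undocumented its exact syntax here is an assumption:

        distrib.exe -Publisher PUBSRV -PublisherDB PubDB -Publication DataMartPub
            -Subscriber SUBSRV -SubscriberDB DataMart -Distributor DISTSRV
            -BcpBatchSize 1000 -DisableResumableSnapshot 1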

    Alternatively, you can try to get the benefit of minimal logging by taking advantage of partition-switching support, as follows:

    1) Deliver the snapshot to an empty staging table.

    2) Switch the staging table into the "real" archive table in the post-snapshot script.

    I haven't tried this approach myself (I should probably do so shortly), but it should work in theory.
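
    A minimal sketch of what that post-snapshot script could contain; the table names are hypothetical, and the switch requires both tables to have identical schemas and indexes and to live on the same filegroup:

        -- Hypothetical post-snapshot script (SQL Server 2005 or later):
        -- move the freshly bulk-loaded staging table into the archive
        -- table as a metadata-only operation, so the rows are never
        -- rewritten (and therefore never fully logged a second time).
        -- dbo.Archive must be empty for a non-partitioned switch.
        ALTER TABLE dbo.ArchiveStaging SWITCH TO dbo.Archive;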

    Hope that helps,

    -Raymond

    Monday, October 16, 2006 6:07 PM

All replies

  • I doubt this is replication-related. Check for open transactions.
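
    For instance, DBCC OPENTRAN reports the oldest active transaction in a database, which is a common reason log truncation gets held up; the database name below is a placeholder:

        -- Report the oldest active transaction that may be preventing
        -- log truncation in the subscriber database.
        DBCC OPENTRAN ('SubscriberDB')

        -- Show how full each database's transaction log currently is.
        DBCC SQLPERF (LOGSPACE)
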
    Friday, October 13, 2006 3:59 AM
  • Several reasons are outlined in http://support.microsoft.com/kb/317375. I agree that this is not a replication issue.
    Friday, October 13, 2006 10:54 AM
  • SQL 2000

    I disagree. I have the same issue on all ten of my subscribers. I run a "backup log [dbname] with truncate_only" every 12 hours to keep the log manageable.
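
    A sketch of what that scheduled job might run; [dbname] and the logical log file name are placeholders, and note that TRUNCATE_ONLY breaks the log backup chain (it was removed entirely in later SQL Server versions):

        -- Discard the inactive portion of the log without backing it up.
        BACKUP LOG [dbname] WITH TRUNCATE_ONLY

        -- Optionally shrink the physical log file afterwards.
        DBCC SHRINKFILE (dbname_log)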

    Saturday, October 14, 2006 2:00 AM
  • Can you run SQL Profiler during replication so you can see what's happening in your transaction logs? What are your database configuration options?
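
    For the configuration side, a quick way to check the recovery model and database options (the database name is a placeholder; DATABASEPROPERTYEX is available from SQL Server 2000 on):

        -- Report the current recovery model of the subscriber database.
        SELECT DATABASEPROPERTYEX('SubscriberDB', 'Recovery') AS RecoveryModel

        -- List the database's files, sizes, and option settings.
        EXEC sp_helpdb 'SubscriberDB'
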
    Saturday, October 14, 2006 9:28 AM
  • Hi all,

    This is very much replication-related. Anyway, it's not only the transaction logs that are growing; my data files are growing as well, since I turned on the "delete records that match the row filter" option.

    I am replicating to a data mart and dealing with a large volume of data at the end of every week.

    I think I could minimize this if the huge volume of replicated data went into the data mart in small batches, say in batches of hundreds or thousands of rows, but I don't think there is any way to control that in replication.

    ------------------------------------------------------------------------

    Looks like a new title [MVP]. Congrats!

    Regards,

    Joey

    Saturday, October 14, 2006 5:09 PM
  • One note: the /DisableResumableSnapshot parameter is unsupported by Microsoft, so you have to cross your fingers if something goes wrong.

    Yo joeydj, thanks. I only noticed it recently. But it's for a different category, not SQL Server. :(

    Tuesday, October 17, 2006 1:06 PM