Database too big

  • Question

  • I have a 750GB database running SQL Server 2008R2. It has only one mdf file, and it is taking more than 24hrs to backup.

    Is there a way of splitting the data file by table, so I can do single-table backups, as most tables are static?

    Friday, January 4, 2013 8:27 AM


All replies

  • Hi Kevin,

    You may place the frequently modified tables on separate filegroups. Then you can selectively backup the filegroups as explained here.

    This would reduce the backup time.
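
    A minimal sketch of the filegroup approach (the database name `MyDb`, filegroup `ActiveFG`, table `dbo.Orders`, and all paths are placeholders — adjust for your environment):

    ```sql
    -- Create a separate filegroup for frequently modified tables.
    ALTER DATABASE MyDb ADD FILEGROUP ActiveFG;
    ALTER DATABASE MyDb
        ADD FILE (NAME = MyDb_Active, FILENAME = 'D:\Data\MyDb_Active.ndf')
        TO FILEGROUP ActiveFG;

    -- Rebuilding the clustered index moves the table onto the new filegroup.
    CREATE UNIQUE CLUSTERED INDEX PK_Orders
        ON dbo.Orders (OrderID)
        WITH (DROP_EXISTING = ON)
        ON ActiveFG;

    -- Back up just that filegroup instead of the whole database.
    BACKUP DATABASE MyDb
        FILEGROUP = 'ActiveFG'
        TO DISK = 'E:\Backup\MyDb_ActiveFG.bak';
    ```

    Note that static tables stay on PRIMARY, so the filegroup backup only has to cover the data that actually changes.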

    Pradeep Adiga
    Twitter: @PradeepAdiga

    Friday, January 4, 2013 10:04 AM
  • Kevin,

    While Pradeep is correct that this will reduce the backup time, the reduction applies only to the individual file/filegroup backup. To bring the database to a consistent state on restore, you'd still need to keep every log backup since the oldest file/filegroup backup. That makes your restores much longer and your backup (and restore!) cycle more complex, and it extends your data retention period, so more data has to be kept. Be prepared to test this and make sure you have it correct BEFORE a disaster happens. If the backup device is this slow, it may take more time to retrieve the files needed to restore the database than to actually restore it! What are your RPO and RTO for this database/server?

    750 GB in 24 hours ≈ 533 MB/minute (roughly 9 MB/s)

    That's not very good in terms of throughput. While I would still break apart a 750 GB database, depending on the situation, I would also look into why the throughput of the backup device is this slow. If it's that slow to back up, it will probably be that slow to restore, and you could blow your SLA.


    Sean Gallardy | Blog | Twitter

    Friday, January 4, 2013 3:57 PM
  • I'll second Sean; I have backed up larger databases faster. Are you by any chance backing up over a slow or busy network connection?

    Are you using COMPRESSION?

    Mohit K. Gupta --- MCTS, MCITP, MCC2011
    My Blog OpsVault My MSDN Blog Twitter @SQLCAN

    Friday, January 4, 2013 8:41 PM
  • Hi,

    If you're using SQL Server 2008 R2 or a later version, try backup WITH COMPRESSION (on SQL Server 2008 this requires Enterprise Edition; from 2008 R2 onward it is available in Standard Edition and higher). If possible, run the backup during off-peak hours, and check whether any other jobs are running in parallel with your backup.

    Backup compression alone should go a long way toward solving your problem.
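
    A minimal sketch of a compressed backup (database name and path are placeholders):

    ```sql
    -- COMPRESSION typically cuts both backup size and elapsed time
    -- substantially, at the cost of some CPU during the backup.
    -- STATS = 10 prints progress every 10 percent.
    BACKUP DATABASE MyDb
        TO DISK = 'E:\Backup\MyDb.bak'
        WITH COMPRESSION, STATS = 10;
    ```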

    Regards, Ashish ----------------------------------------------------------- Please mark answered if I've answered your question and vote for it as helpful to help other user's find a solution quicker..

    • Proposed as answer by Ashish_DbaSQL Saturday, January 5, 2013 2:42 PM
    Saturday, January 5, 2013 2:42 PM
  • and it is taking more than 24hrs to backup.

    Hello Kevin,

    I agree with Sean & Mohit; that's very slow throughput. Do you back up to tape or to disk? You should check the components involved, like the network connection, switch, tape drive, etc.

    If you back up to disk, then a "striped backup" could help you speed it up: back up to several backup files in parallel to increase the I/O throughput. See TechNet ScriptCenter: Striped Backup of Sql Server Databases
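
    A minimal sketch of a striped backup (database name and paths are placeholders; ideally each file sits on a separate physical disk or LUN):

    ```sql
    -- The backup is striped across all four files in parallel;
    -- note that ALL four files are required together to restore.
    BACKUP DATABASE MyDb
        TO DISK = 'E:\Backup\MyDb_1.bak',
           DISK = 'F:\Backup\MyDb_2.bak',
           DISK = 'G:\Backup\MyDb_3.bak',
           DISK = 'H:\Backup\MyDb_4.bak'
        WITH COMPRESSION, STATS = 10;
    ```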

    Olaf Helper

    Blog Xing

    Sunday, January 6, 2013 7:56 AM
  • Hi,

    Yes, it is possible to split the database into multiple data files, and the data files can be managed using filegroups. You can back up only selected filegroups, and filegroups can also be placed on different physical disks, enabling faster backup and access.

    For better I/O performance you can place the database on a SAN.

    For better backup performance you can plan to use RDX backup drives.

    Please help and appreciate others by using these features: "Propose As Answer", "Vote As Helpful" and "Mark As Answer"


    Twitter: @waqas8777
    Linked In:

    Monday, January 7, 2013 12:07 PM
  • You can use a third-party backup tool such as LiteSpeed or Red Gate SQL Backup to speed up the backup and restore process.

    Ramesh Babu Vavilla MCTS,MSBI

    Tuesday, January 8, 2013 5:55 PM