Transaction Logs Management

  • Question

  • Hi,

    So in Oracle we have redo logs organized in groups. Oracle writes from group 1 through the last available group, and before overwriting group 1 it makes sure that group's data has been archived [if the DB is in archivelog mode]. It's all done automatically by Oracle once archivelog mode is set.

    But in SQL Server, I found that there is only one transaction log, and it works pretty much the same way: writes proceed towards the end of the log and then wrap around to overwrite the beginning when it's full. But if the FULL recovery model is set, the DBA has to make sure the log data has been backed up before it gets overwritten. And if the DBA fails to back it up, the instance will hang with a 9002 error.

    My question is: in Oracle we don't have to worry about this, but here we need to find a frequent enough schedule for backing up the log to avoid the hang situation. How can I find out how often my logs are getting overwritten [a log switch, in Oracle terms], and how frequently should I schedule the log backup? Should I just run "BACKUP LOG" every hour to have it covered? What is the best practice here? How can we assume the DB traffic will be the same all the time? What if there is one crazy transaction which generates a lot of transaction log and fills the whole log before my next scheduled log backup? How do I avoid that? In Oracle it's all handled automatically: the redo is archived and managed by Oracle when that event occurs. In SQL Server it's time based, so how can we avoid letting one big transaction hang the entire instance?

    Thanks

    Friday, September 22, 2017 3:00 PM


All replies

  • The 9002 error would only occur if the log file needs to grow and cannot, either because the disk is full or you have restricted the size of the log file.

    The only reason to run a database in "full" recovery mode is if you need "point in time recovery", and are doing regular log backups.  If you don't, then "simple" mode is what you need.

    Please see:

    https://technet.microsoft.com/en-us/library/ms175987%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396
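
    A minimal sketch of checking and changing the recovery model; YourDB is a placeholder database name:

        -- List the recovery model of every database on the instance
        SELECT name, recovery_model_desc
        FROM sys.databases;

        -- If point-in-time recovery is not needed, SIMPLE avoids the need for log backups
        ALTER DATABASE YourDB SET RECOVERY SIMPLE;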

    Friday, September 22, 2017 3:49 PM
    Answerer
  • Thanks for the reply.

    With simple recovery, I won't be able to recover up to the time of failure; I can recover only up to the last good backup. So if the backup was taken at 2 AM and my DB crashed at 4 PM, I will lose a whole day's worth of data. That's why I want the FULL recovery model, and then I need to back up the transaction logs. Am I missing anything here?
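
    For reference, a point-in-time restore under the FULL recovery model looks roughly like this sketch; the database name, file paths, and target time are illustrative placeholders:

        -- Restore the last full backup, leaving the database able to accept log restores
        RESTORE DATABASE YourDB
            FROM DISK = N'D:\Backups\YourDB_full.bak'
            WITH NORECOVERY;

        -- Apply the log backup, stopping just before the failure time
        RESTORE LOG YourDB
            FROM DISK = N'D:\Backups\YourDB_log.trn'
            WITH STOPAT = '2017-09-22T15:55:00', RECOVERY;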

    Friday, September 22, 2017 3:59 PM
  • If you need to recover to a point in time, you need full recovery.  You also need to do log backups often enough so the log file does not fill up the hard drive.

    You cannot avoid log file usage. It works the same as Oracle. Large transactions can use a lot of log space, and you need to have enough disk space to allow for that between log backups. In some cases you may need to do log backups every 15 minutes, if that is what it takes to keep disk space usage to a minimum.
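
    A minimal log backup statement that could be scheduled (for example via a SQL Server Agent job); the database name and path are placeholders, and in practice each backup should go to a uniquely named file so the log chain is preserved:

        BACKUP LOG YourDB
        TO DISK = N'D:\Backups\YourDB_log_201709221700.trn';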

    Friday, September 22, 2017 5:05 PM
    Answerer
    And how can you verify/check how much transaction log data has been generated in a given time? For example, how can I check how much log data was generated yesterday and how many times the log actually switched?

    Thanks

    Friday, September 22, 2017 5:23 PM
    The log file's physical size is the best way to determine this. There really is no other method.

    The log file never physically gets smaller unless you manually shrink it, so its current size reflects the largest the log has ever needed to be.
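
    A sketch of checking the physical log file sizes through the sys.master_files catalog view (size is reported in 8 KB pages):

        -- Physical size of each database's log file(s), in MB
        SELECT DB_NAME(database_id) AS database_name,
               name                 AS logical_file_name,
               size * 8 / 1024     AS size_mb   -- convert 8 KB pages to MB
        FROM sys.master_files
        WHERE type_desc = 'LOG';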

    Friday, September 22, 2017 5:34 PM
    Answerer
    If I have a 1 GB log file and my database generated 4 GB of log, then it would have been overwritten 4 times without increasing in size. If I check the size of the file, it will be only 1 GB, but that's not the correct size of the log data my DB generated. How can I find out how many times my log file has been overwritten? That would give me a clear picture of the actual amount of log data my DB is generating.

    These are common Jr. DBA level maintenance tasks in the Oracle world; please don't let me down by saying there is no way to achieve this at all in SQL Server. I will start "not liking it" even more.

    Thanks

    Friday, September 22, 2017 5:48 PM
    The log in SQL Server in full recovery is only overwritten after you do a log backup. So the amount of "used" space just before the backup is what determines the size of the log file.

    Please see:

    https://www.mssqltips.com/sqlservertip/1225/how-to-determine-sql-server-database-transaction-log-usage/
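
    A quick sketch of checking why a database's log space cannot currently be reused; a value of LOG_BACKUP means a log backup is required before the space can be overwritten:

        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM sys.databases;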

    Friday, September 22, 2017 5:54 PM
    Answerer
    Why is it so locked down? I'm not fully convinced, but I guess that's where Microsoft chimes in; nothing is fully disclosed. I guess we can survive with this info, but we won't know exactly how it works internally.

    In Oracle, by using the size of the archive logs, I can tell how much data has been generated on a given day [or over the last whole week]. I can tell which days are busiest for my application because it generated a lot of archive logs/transactions on those days. How can we do that in SQL Server?

    Thanks

    Friday, September 22, 2017 6:02 PM
  • SQL Server logs do not work that way.  You can't use the log file to determine activity.  
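
    One hedged workaround that comes closest to the Oracle archive-log approach: if regular log backups are being taken, the backup history in msdb records the size of each log backup, which approximates the log volume generated per day. YourDB is a placeholder name:

        -- Total size of log backups (type 'L') per day, as a rough activity measure
        SELECT CONVERT(date, backup_finish_date) AS backup_day,
               SUM(backup_size) / 1024 / 1024   AS log_backup_mb
        FROM msdb.dbo.backupset
        WHERE database_name = N'YourDB'
          AND type = 'L'
        GROUP BY CONVERT(date, backup_finish_date)
        ORDER BY backup_day;
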
    Friday, September 22, 2017 6:12 PM
    Answerer
    As Tom mentioned, things are just different in the SQL Server world. We decide the acceptable recovery point objective (how much data we can afford to lose) and do log backups based on that. And the ldf file(s) for each database just need to be large enough to accommodate the log records that build up during that time period.

    An LDF file is divided into Virtual Log Files (VLFs), and each VLF is either used or free. Think of SQL Server going through the VLFs, using and freeing them (automatically in simple recovery, or when you do a log backup in full recovery) as time goes by.

    There is *some* insight into the log, but not exactly what you are asking for.

    DBCC SQLPERF(logspace): How "full" the log is, or more precisely, the percentage of the log, by size, that is made up of in-use VLFs.

    DBCC LOGINFO: Lists each VLF and tells you whether it is in use or free to overwrite. This is the one I recommend you spend time with.

    fn_dblog: Allows you to peek at the log records. In general not very useful.
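
    A short sketch of running these against the current database (DBCC LOGINFO and fn_dblog are undocumented, so output details may vary by version):

        -- Percentage of log space in use, one row per database
        DBCC SQLPERF(LOGSPACE);

        -- One row per VLF in the current database;
        -- Status = 2 means in use, Status = 0 means free to reuse
        DBCC LOGINFO;

        -- Peek at individual log records (in general not very useful day to day)
        SELECT TOP (10) [Current LSN], Operation, Context
        FROM fn_dblog(NULL, NULL);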


    Tibor Karaszi, SQL Server MVP (Web Blog)

    Monday, September 25, 2017 6:56 AM