Trimming the transaction log (SQL 2008)

Question
-
We have a SQL 2008 database in the Simple recovery model. The data (mdf) file has grown to about 500 GB and the log (ldf) file has grown to about 250 GB. Other than changing the recovery model to Full, and telling the devs to keep transactions as short as possible, what else can I do administratively to keep the log file size down?
TIA,
edm2
Monday, March 10, 2014 3:11 PM
Answers
-
1. You should definitely look at the autogrowth setting of your log file. I suspect it is set to something like 20-30%; a fixed increment of 100-200 MB is a tentative value to start with.
2. The other thing to check is an open transaction preventing the log from truncating. Long-running transactions can keep the log from being truncated:
DBCC OPENTRAN(db_name)
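As a minimal sketch of both checks (the database name YourDB is a placeholder):

```sql
-- Check for an open transaction that may be preventing log truncation
DBCC OPENTRAN ('YourDB');

-- Inspect the log file's current autogrowth setting (run inside the database)
SELECT name,
       growth,
       CASE WHEN is_percent_growth = 1
            THEN 'percent'
            ELSE '8-KB pages' END AS growth_unit
FROM sys.database_files
WHERE type_desc = 'LOG';
```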
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
- Edited by Shanky_621MVP Thursday, March 13, 2014 11:07 AM
- Marked as answer by Ed Price - MSFTMicrosoft employee Monday, March 24, 2014 5:54 PM
Monday, March 10, 2014 3:21 PM -
>> 1. Good point. I had forgotten about that but a check showed the Autogrowth was already set to 1GB.
>> 2. From the perspective of trans log size, is it better to set a large DB to Full Recovery mode and take hourly trans dump (to keep the trans log size small) or keep the DB in Simple Recovery mode? (which supposedly has minimal impact on log growth)
1 GB is a large autogrowth value for a log file. I suggest you check this value; I am sure you do not require 1 GB autogrowth.
Regarding the second question: it depends on whether you need point-in-time (PIT) recovery. If PIT recovery is not required, why fill space with transaction log backups? There won't always be an issue with the log file in Simple recovery; it is just that sometimes you can see a huge log even in Simple recovery.
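If point-in-time recovery is needed, the hourly-log-backup approach could be sketched as follows (the database name and backup paths are placeholders; a full backup must precede the first log backup):

```sql
-- Only if PIT recovery is required: switch to the FULL recovery model
ALTER DATABASE YourDB SET RECOVERY FULL;

-- A full backup starts the log backup chain
BACKUP DATABASE YourDB TO DISK = N'D:\Backup\YourDB.bak';

-- Schedule this hourly (e.g. via SQL Server Agent) to keep the log small
BACKUP LOG YourDB TO DISK = N'D:\Backup\YourDB_log.trn';
```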
- Proposed as answer by Fanny Liu Tuesday, March 18, 2014 1:59 AM
- Marked as answer by Ed Price - MSFTMicrosoft employee Monday, March 24, 2014 5:54 PM
Monday, March 10, 2014 4:01 PM -
It is always recommended to set an ample initial log file size, with autogrowth set according to the transaction rate of your database. There are many instances where you can't control the growth of the log file. It is a good idea to keep transactions as small as possible.
Also, try to understand what percentage of the log space is used and what is contributing to it. As Shanky mentioned, find any open transactions using DBCC OPENTRAN(DB_Name).
Make sure the log file does not have unused space in it; if it does, you can shrink it: DBCC SHRINKFILE(file_id)
To check usage: DBCC SQLPERF(LOGSPACE)
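A sketch of how those checks fit together; sys.databases also reports why a log cannot currently be truncated:

```sql
-- Log size and percentage used for every database on the instance
DBCC SQLPERF (LOGSPACE);

-- Reason the log space is not being reused
-- (e.g. ACTIVE_TRANSACTION, LOG_BACKUP, NOTHING)
SELECT name, log_reuse_wait_desc
FROM sys.databases;
```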
-Prashanth
- Marked as answer by Ed Price - MSFTMicrosoft employee Monday, March 24, 2014 5:54 PM
Monday, March 10, 2014 4:18 PM (Answerer) -
You should shrink the transaction log:
http://www.sqlusa.com/bestpractices2005/shrinklog/
SIMPLE mode is the best choice if FULL mode is not needed for point-in-time recovery, mirroring and so on.
For the future, you should track the log size. If you see a large, sudden increase, investigate.
Monitoring SQL Server database transaction log space By Greg Robidoux
Quote from the Robidoux article: "Problem: Keeping track of your transaction log usage is key to understanding how your database transaction logs are being used as well as keeping track of how much space the transaction logs are using. Depending on the recovery model that is set within your database and also the backup jobs that you run, the size of your transaction log can vary quite a bit. So what is the best way to keep track of the transaction log usage?
Solution: SQL Server has a command that you can run to see the current size of the transaction logs and how much space is currently being utilized. The command is DBCC SQLPERF(logspace). This is a very simple command to run and interpret and this is key to getting an understanding of how your transaction logs are being used."
Kalman Toth, Database & OLAP Architect
- Edited by Kalman Toth Monday, March 10, 2014 4:35 PM
- Proposed as answer by Fanny Liu Tuesday, March 18, 2014 1:59 AM
- Marked as answer by Ed Price - MSFTMicrosoft employee Monday, March 24, 2014 5:54 PM
Monday, March 10, 2014 4:33 PM
All replies
-
Regarding
>> You should surely look at Autogrowth of your log file
Good point. I had forgotten about that but a check showed the Autogrowth was already set to 1GB.
From the perspective of trans log size, is it better to set a large DB to Full Recovery mode and take hourly trans dump (to keep the trans log size small) or keep the DB in Simple Recovery mode? (which supposedly has minimal impact on log growth)
edm2
Monday, March 10, 2014 3:51 PM -
The recovery model affects the reuse of space in the log file, not the size of the log file. The log file is that big because it needed to be that size at some point. The log file does not physically get smaller unless you shrink it manually.
Monday, March 10, 2014 3:54 PM (Answerer) -
Tom,
>>>> The log file does not physically get smaller unless you shrink it manually.
Oops. And I know it is generally frowned upon to shrink the physical log file, due to disk fragmentation.
edm2
Monday, March 10, 2014 3:57 PM -
If you want the physical space back, you have no choice but to shrink the file and deal with the consequences.
But, before you do that, you need to figure out a reasonable size for the log file and shrink it to that size.
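A hedged sketch of that two-step approach (YourDB, the logical log file name, and the 50 GB target are placeholders to adjust for your environment):

```sql
USE YourDB;

-- Find the log file's logical name and current size (size is in 8-KB pages)
SELECT name, size
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Shrink to the chosen target, specified in MB
DBCC SHRINKFILE (YourDB_log, 51200);  -- 51200 MB = 50 GB
```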
Monday, March 10, 2014 4:33 PM (Answerer) -
The answer I looked at is talking about autogrowth.
Remember, the right autogrowth setting depends on your recovery model, I/O activity, and requirements. Setting it to a small value will force SQL Server to expand the file frequently, which creates extra overhead for SQL Server.
A better way to control log file shrinking is to use AutoShrink.
First, check whether your databases are already set to use autoshrink:
sp_msforeachdb @command1 = 'Select ''[?]'',DATABASEPROPERTYEX(''?'',''IsAutoShrink'')'
You can use the Log File Auto Shrink event class for your purpose.
Hope this helps.
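The same check can also be done from sys.databases, which avoids the undocumented sp_msforeachdb (YourDB below is a placeholder):

```sql
-- Which databases have autoshrink enabled
SELECT name, is_auto_shrink_on
FROM sys.databases;

-- Toggle the option per database
ALTER DATABASE YourDB SET AUTO_SHRINK ON;  -- or OFF
```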
Tuesday, March 11, 2014 1:42 PM -
>>>> the better way to control log file shrinking is to use AutoShrink
Wow, giving advice on performance about autogrowth and then recommending keeping AutoShrink ON for log files. You seem to have missed the point that regularly shrinking the log file is also an evil operation. Eventually, when the log grows again, there can be a performance impact: instant file initialization does not apply to log files, so growth can be slow and overall writing to the log will be slow.
An autogrowth of 1 GB for the log file is large. I agree that it should be calculated based on the environment, but a starting value of 200 MB is not bad for log files compared to 1 GB.
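Setting a fixed growth increment along those lines might look like this sketch (the database and logical file names are placeholders):

```sql
-- Replace a percentage or oversized increment with a fixed 200 MB growth
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_log, FILEGROWTH = 200MB);
```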
Tuesday, March 11, 2014 1:58 PM