The log file of my user db is full. Could you please confirm if the below resolves the issue:

  • Question

  • Good morning experts,

    The log file of my user db is full. Could you please confirm if the below resolves the issue:

    Take log backup->Change the recovery model from full to simple->shrink the log file->Change the recovery model back to full and take an immediate full backup

    Please let me know otherwise.


    Kiran

    Sunday, September 22, 2019 3:14 AM

All replies

  • Good morning experts,

    The log file of my user db is full. Could you please confirm if the below resolves the issue:

    Take log backup->Change the recovery model from full to simple->shrink the log file->Change the recovery model back to full and take an immediate full backup

    Please let me know otherwise.


    Kiran

    Good day Kiran,

    We cannot confirm whether these steps resolve the issue, since we have no idea what your requirements are and we are not fully familiar with your system!

    "Change the recovery model from full to simple" has huge implications!

    The first thing you should understand is why you use the full recovery model rather than the simple recovery model. You should design the system according to the way you use it.

    My guess, given the lack of information in your question, is that you do not know what full or simple mode is and that you have never backed up your log, but I might be wrong.

    You should go over the following document and learn the topic from start to end:
    https://www.mssqltips.com/sqlservertip/5343/understanding-sql-server-recovery-models-and-transaction-log-use/

    This is only a short review, but it should give you the basic understanding.
    In general, if you do not back up the log file, there is no point in using the full recovery model, since the log will keep growing.
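
    To see where your database actually stands, a quick check like the following can help (the database name MyUserDb is a placeholder; substitute your own):

    ```sql
    -- Check the recovery model and what is currently blocking log reuse.
    SELECT name,
           recovery_model_desc,   -- FULL, BULK_LOGGED, or SIMPLE
           log_reuse_wait_desc    -- e.g. LOG_BACKUP, ACTIVE_TRANSACTION, NOTHING
    FROM sys.databases
    WHERE name = 'MyUserDb';      -- placeholder database name
    ```

    If log_reuse_wait_desc shows LOG_BACKUP, the log is simply waiting for a log backup, which matches the situation described above.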


    signature   Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]    [Linkedin]

    Sunday, September 22, 2019 3:32 AM
    Moderator
  • Kiran

    Why not simply shrink the log file? If that does not work, you can issue BACKUP LOG dbname TO DISK='NUL' a few times

    and then shrink again
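
    A minimal sketch of the shrink step (database and logical file names are placeholders; DBCC SHRINKFILE takes a target size in MB):

    ```sql
    USE MyUserDb;  -- placeholder database name
    GO
    -- Find the logical name and current size of the log file
    SELECT name, size/128 AS size_mb
    FROM sys.database_files
    WHERE type_desc = 'LOG';
    GO
    DBCC SHRINKFILE (MyUserDb_log, 1024);  -- shrink the log to roughly 1 GB
    GO
    ```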


    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Sunday, September 22, 2019 4:15 AM
    Moderator
  • Hi

    There are various options to handle this scenario, and the documentation below explains them in detail.

    https://docs.microsoft.com/en-us/sql/relational-databases/logs/troubleshoot-a-full-transaction-log-sql-server-error-9002?view=sql-server-2017

    However, shrinking is the direct, quick option, and we cannot suggest it, as we are not aware of your requirements.

    Hope this is helpful!

    Thank you


    If this post helps to resolve your issue, please click the "Mark as Answer" of that post and/or click Answered "Vote as helpful" button of that post. By marking a post as Answered and/or Helpful, you help others find the answer faster.

    Sunday, September 22, 2019 6:52 AM
  • The log file of my user db is full. Could you please confirm if the below resolves the issue:

    Take log backup->Change the recovery model from full to simple->shrink the log file->Change the recovery model back to full and take an immediate full backup

    Short answer: you are asking the wrong question.

    You need to understand the implications of setting the database to simple recovery or doing something like BACKUP LOG dbname TO DISK='NUL' as suggested by Uri (an utterly bad piece of advice in my opinion). This means that you break the log chain. "But I'm taking a full backup immediately, so what is the problem?" The problem is that there may already be corruption in the database which you have not noticed. Or that the backup you take is corrupted because the hardware you write the backup to is faulty.

    It is not uncommon in the case of a disaster to find that the most recent backup also has corruption, and you have to go back several days. But if you do not have an unbroken log chain at that point, you cannot perform a point-in-time recovery up to the moment of the disaster.

    Now, this is far from always required. Maybe users are perfectly happy if you restore a week-old backup, because they have their own transaction log on paper and can re-enter the data. (I have encountered this situation!) Or maybe it is a dev or test database where losing a few days of data is not an issue, and it is more important to resolve the log-full issue in a simple manner.

    But you did not tell us about these details, and this indicates that you have not thought about it. That is why I am saying that you are asking the wrong question. Maybe the steps you suggest will resolve the urgent issue, but as I have outlined above, they can cause many more problems further down the road.

    And, no, I'm not finished yet. Shrinking? Yes, shrinking the log file is correct, if you know why the log exploded and you know that it was an exceptional event. But if the log file filled up during normal operation, there is little point in shrinking it - it will just grow again. Instead you need to get more disk space to host your transaction log. To resolve the urgent matter, you could perhaps add a second log file on a different disk. Once you have acquired more disk space, you should consolidate the log files into a single one on a single disk. It could also be that you can extend the RAID (if the file is on a RAID array), or extend the LUN (if the disk is on a SAN).
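
    Adding a temporary second log file on another disk, as described above, could be sketched like this (all names, paths, and sizes are placeholders):

    ```sql
    -- Emergency relief: add a second log file on a disk that still has space.
    ALTER DATABASE MyUserDb
    ADD LOG FILE
    (
        NAME = MyUserDb_log2,                       -- placeholder logical name
        FILENAME = 'F:\SQLLogs\MyUserDb_log2.ldf',  -- placeholder path on another disk
        SIZE = 4096MB,
        FILEGROWTH = 1024MB
    );
    -- Later, once the original disk has been extended and the extra
    -- file's VLFs are no longer active, remove it again:
    -- ALTER DATABASE MyUserDb REMOVE FILE MyUserDb_log2;
    ```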


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Sunday, September 22, 2019 9:44 AM
    Moderator
  • Take log backup->Change the recovery model from full to simple->shrink the log file->Change the recovery model back to full and take an immediate full backup

    These actions may or may not resolve the full transaction log problem, but there is no need to change the recovery model to SIMPLE.

    The log backup alone may resolve the full-log issue if the cause is that the log is not backed up frequently enough. The proper solution in that case is to schedule transaction log backups more often and ensure the log is sized to accommodate all activity between log backups.

    If the full log is due to a long-running transaction, you should expand the log instead of shrinking it. Regardless of recovery model, the transaction log must be sized to accommodate the largest and longest-running transaction.
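
    To tell which of these two cases applies, you can check log usage and look for a long-running transaction, for example (MyUserDb is a placeholder database name):

    ```sql
    DBCC SQLPERF(LOGSPACE);  -- log size and percentage used for every database
    GO
    USE MyUserDb;            -- placeholder database name
    GO
    DBCC OPENTRAN;           -- reports the oldest active transaction, if any
    GO
    ```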


    Dan Guzman, Data Platform MVP, http://www.dbdelta.com


    Sunday, September 22, 2019 11:35 AM
    Moderator
    >>> BACKUP LOG dbname TO DISK='NUL' as suggested by Uri (an utterly bad piece of advice in my opinion). This means that you break the log chain.

    use master
    go
    create database a
    on (name=a_dat, filename='e:\a_dat.mdf', size=1mb, filegrowth=1mb)
    log on (name=a_log, filename='e:\a_log.ldf', size=1mb, filegrowth=1mb )
    go
    use a
    go
    create table t1 (c1 int)
    go

    insert into t1 values (1)
    go

    use a
    go

    /* delete the row from t1. This represents the mistake we want to
    recover from. */
    delete from t1
    go
    use master
    go

    backup database a to disk='e:\a_bak1.bak'
    go

    backup log a to disk ='e:\a_bak3.bak'
    backup log a to disk='nul'
    go

    /* clean up
    use master
    go
    drop database a
    go
     */

    use master
    restore database a from disk='e:\a_bak1.bak' with norecovery
    restore log a from disk='e:\a_bak3.bak' with recovery
    go
    use a
    go
    /* prove that the deleted row from t1 is restored */
    select * from t1
    go


    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Sunday, September 22, 2019 11:51 AM
    Moderator
    Yes, Uri, we can of course recover to points in time before we set the database to simple recovery or backed up the transaction log to NUL. But we cannot recover from an old backup to a point after this action.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Sunday, September 22, 2019 12:01 PM
    Moderator
    We were talking about freeing up log space, not recovery. Do you see any signs of corruption in the OP's post?

    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Sunday, September 22, 2019 12:07 PM
    Moderator
    We were talking about freeing up log space, not recovery. Do you see any signs of corruption in the OP's post?

    If you care to read my post, my point was that there can be corruption that has not been discovered yet.

    Breaking the log chain is nothing you should do lightly if you have a critical database for which your acceptance of data loss is minimal.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Sunday, September 22, 2019 12:45 PM
    Moderator
  • Hi,

    Erland is correct.

    If you perform a backup to the NUL device, then all subsequent differential backups will be useless (using official solutions) from that point, until you take a non-NUL full backup. Your example uses the backup taken before you performed the log backup to the NUL device, which obviously works, but what about the next steps?

    * By the way, in your example the first full backup is taken after you delete the data, so you will not get the deleted row back in t1 after restoring.

    USE master
    GO
    DROP DATABASE IF EXISTS a;
    GO
    CREATE DATABASE a
    GO
    USE a
    GO
    CREATE TABLE t1 (c1 int)
    GO
    
    ---------- First full backup to start the chain
    USE master
    GO
    BACKUP DATABASE a to 
    	disk='C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\Backup\a_bak1.bak'
    GO
    
    ---------- 
    USE a
    GO
    insert into t1 values (1)
    go
    ---------- backup the log provide a point in time we can restore to
    USE master
    GO
    backup log a to 
    	disk ='C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\Backup\a_bak2.bak'
    GO
    
    ---------- 
    USE a
    GO
    insert into t1 values (2)
    GO
    
    ---------- Backup to a nul device! 
    ---------- Therefore, all subsequent differential/log backups are useless from this point using simple restore
    USE master
    GO
    backup log a to disk='nul'
    GO --- Your data is not actually backed up! This is a huge issue.
    
    -- You can add more actions on the database a before next backup
    -- These will be lost for simple restore
    -- unless you start a new backup chain.
    backup log a to 
    	disk ='C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\Backup\a_bak3.bak'
    GO -- this is useless for simple restore!
    
    ---------- even so we did backup the log before the next delete
    ---------- we will not be able to restore it directly using the last backup
    USE a
    GO
    delete from t1 where c1 = 2
    GO
    
    ------------- clean up
    USE master
    GO
    DROP DATABASE a
    GO
    
    --------- We have a backup before the delete... will it work?
    --------- can we get the second row before the delete?
    RESTORE DATABASE a FROM 
    	disk='C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\Backup\a_bak1.bak' with norecovery
    GO -- OK
    restore log a from 
    	disk='C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\Backup\a_bak2.bak' with norecovery
    GO -- OK
    restore log a from 
    	disk='C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\Backup\a_bak3.bak' with recovery
    GO
    -- ERROR
    -- The log in this backup set begins at LSN 37000000041600001,
    -- which is too recent to apply to the database.
    -- An earlier log backup that includes LSN 37000000037600001 can be restored.
    
    --------- We can restore only to the point in time before the backup to a nul device!
    
    /**********************
    Undocumented: we actually can read the data from the third backup
    after the backup to a nul device, and probably we can use it to get
    the deleted data of the second row, but this is an advanced topic
    and not a supported action
    ***************************************************/
    
    -- we can recover the log to the point in time BEFORE the backup to a nul device
    restore log a with recovery
    GO
    USE a
    GO
    select * from t1
    go -- we lost the second row

     

    In any case! Even if you do not care about restoring (WHICH MAKES NO SENSE!), we are still probably not fixing anything here. If something caused the log file to grow, then it will probably grow again. Backing up to the NUL device will not solve the underlying issue in this case.

    Moving to the simple recovery model can fit if the simple recovery model fits your needs, but moving to simple just to reduce the size every X days and then back to FULL makes no sense, for the same reason.

    The solution should be to design the system the right way, according to the way we use the database and the way the database "behaves".


    signature   Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]    [Linkedin]



    Sunday, September 22, 2019 11:17 PM
    Moderator
  • Hello Friend,

    Answering directly: yes, this should solve your problem.
    You will not have data loss at this moment, and it will clear your log.

    But the question you should ask is why this log is getting full.

    You must create a backup routine of full (+ differential) + transaction log backups.
    This routine could be (a suggestion, but it depends on your business and infrastructure):

    Full backup daily
    Differential backup during the day, maybe at 12 noon.
    Transaction log backup every hour, or every 30 minutes or less; this will depend on your business needs.
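
    Such a routine might be sketched as follows (the database name and paths are placeholders; in practice you would schedule these via SQL Server Agent or a maintenance plan):

    ```sql
    -- Daily full backup
    BACKUP DATABASE MyUserDb
        TO DISK = 'D:\Backup\MyUserDb_full.bak' WITH INIT;

    -- Differential backup at midday
    BACKUP DATABASE MyUserDb
        TO DISK = 'D:\Backup\MyUserDb_diff.bak' WITH DIFFERENTIAL, INIT;

    -- Transaction log backup every 30-60 minutes
    BACKUP LOG MyUserDb
        TO DISK = 'D:\Backup\MyUserDb_log.trn';
    ```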

    Changing the database recovery model is not a best practice for a DBA.

    Having the transaction log backup routine will prevent the LDF file from filling up indefinitely if it has no size limit configured; keep in mind that the Express edition also has a 10 GB limit.

    In this case, think a little about the tips from the people here in the forum and create a strategy to keep your backups up to date, so that you do not have to resort to those emergency measures, which at some point may still serve you if you have no other choice.

    If this answer has helped you, mark it helpful so that someone else with a question or similar problem can find an answer or help more easily. Jefferson Clyton Pereira da Silva - [MCSA | MCP | MCTS | MTA | Database Analyst - Sql Server and Oracle]


    Monday, September 23, 2019 12:03 AM
    >> By the way, in your example the first backup is done after you delete the data and you will not get the deleted row from t1 after restoring.

    It is probably a copy-paste mistake; the point was that you can restore the database to the point before you issue the backup to NUL.

    >>>>Using backup to a nul device will not solve the source issue in this case

    You can immediately run a full backup after that statement. By the way, the OP did not even mention that he implements log backups, did he? Ronen, I think that before making speculations we probably need to ask him about the whole design and then suggest the correct solution.

    >>>The solution should be to design the system in the right way according to the >>>>way we use the database and the way the database "behaves".

    A log file may grow; sure, you can control it, but you do not always control the client-side design, and thus the correct solution is the one applied for that specific client, who has not told us the whole story.


    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Monday, September 23, 2019 4:14 AM
    Moderator
  • Hi,

    >> It is probably copy paste mistake

    Makes sense 😀✔
    It was clear that it was a simple issue like this.

    >> the point was that you can restore the database to the point before you issue backup to NUL.

    ...to the point in time of your last backup before you issued the backup to NUL.
    This is not related to the "backup to NUL" but to the fact that you have another, earlier backup.
    This is obviously true ✔

    >> You can immediately run full backup

    Obviously you can do that.
    By doing so you start a new chain (as I mentioned a few times in my response, before and within the code). With that being said, this is not so different from what the OP came with: moving to simple recovery and back to full recovery and starting a new chain... This probably makes no sense for regular maintenance operations! It implies (shouts) a very bad design of the system, and might fit only in very rare cases as a one-time action.

    >> Ronen  I think before making some speculations

    ❌ I DID NOT MAKE ANY ASSUMPTIONS, which is a hugely important point.
    This was NOT my response to the OP but to the discussion that followed. The OP already got my answer in my first response to his question, which was:

    We cannot confirm whether these steps resolve the issue, since we have no idea what your requirements are and we are not fully familiar with your system!... You should go over the following document and learn the topic from start to end

    In fact, the "in this case" was taken out of context and was not related to the OP's case, but to the scenario I presented in the message. That sentence came after presenting the case I was talking about, which is: "If something caused the log file to grow then it will probably grow again."

    The case in question is one where the log file keeps growing each time!

    >> we need probably to ask him about the whole design and  then suggest  the correct solution

    We cannot do that well, in my opinion. I am usually against giving architecture advice on the forums. I think that we cannot learn the full system from several messages in a forum, and it is a very bad idea to give specific 'absolute advice' regarding architecture in the forums (usually), which is why I said above that I DID NOT MAKE ANY ASSUMPTIONS, and that this is a hugely important point.

    >> but not always you control the client side designs

    I am not sure I understand what you mean here and who the client in your message is. My clients are usually those who own the servers, as I am an external consultant focusing on architecture and system design (usually). In most other cases the client is the end user who uses the database, or the application that connects to the database...

    What is clear to me is that if the architect of the system cannot design it well according to the way the database is used and the way the database "behaves", then you should probably replace him ASAP. This is the job of the architect by definition: to learn the system fully and design it to cover all expected scenarios, and if something new happens that was not covered, then to re-design as needed, or at least to know the limitations and present them to the right people.


    signature   Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]    [Linkedin]



    Monday, September 23, 2019 6:33 AM
    Moderator
  • Hi juniorkiran,

    You can’t easily switch to and from SIMPLE recovery if you’re trying to maintain a sequence of log backups. Switching into SIMPLE is no problem, but when you switch back to FULL or BULK_LOGGED, you should first take a complete database backup for the change in behavior to take full effect.
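
    A sketch of that last step (the database name and path are placeholders):

    ```sql
    -- Switching back to FULL does not restart the log chain by itself;
    -- take a full backup immediately afterwards.
    ALTER DATABASE MyUserDb SET RECOVERY FULL;
    GO
    BACKUP DATABASE MyUserDb
        TO DISK = 'D:\Backup\MyUserDb_full.bak' WITH INIT;
    GO
    ```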

    SIMPLE recovery model isn’t recommended for production databases, where you need maximum transaction recoverability. The only time that SIMPLE recovery is really useful is in test and development situations or for small databases that are primarily read-only.

     

    Switching to the SIMPLE recovery model truncates the log in the same way a log backup does; you don’t have to shrink log files in the SIMPLE recovery model.

    So you can shrink the log file without switching to SIMPLE recovery model. However, if the log is never backed up, none of the VLFs are marked as reusable, so no shrinking can take place.

    Hope this could help you.

    Best Regards,

    Amelia Gu



    MSDN Community Support
    Please remember to click "Mark as Answer" the responses that resolved your issue, and to click "Unmark as Answer" if not. This can be beneficial to other community members reading this thread. If you have any compliments or complaints to MSDN Support, feel free to contact MSDNFSF@microsoft.com.

    Monday, September 23, 2019 8:45 AM