tempdb Log file

    Question

  • Hello. When I start executing a Transact-SQL process, my tempdb data file is about 5 MB and its log file is 1 MB. When the process finishes, the log file never gets cleared again, so it uses a lot of hard disk space and I can't run the process again...

    P.S. The process has a BEGIN TRAN at the start and a COMMIT at the end...

    Could you tell me what to do?

    Thursday, March 22, 2007 11:32 PM

Answers

  • When you Stop/Start the SQL Server service, the tempdb database file and log files are reset to their startup values.
    Thursday, March 22, 2007 11:40 PM

All replies

  • You might also want to increase the size of tempdb (a rule of thumb is 25% of the database size, but my feeling is that it depends on your transaction volume).
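    For instance, the startup size of the tempdb files can be raised with ALTER DATABASE so tempdb does not have to autogrow during the batch. This is only a sketch: the sizes are placeholders, and the logical file names below (tempdev, templog) are the installation defaults, so check them on your instance first.

```sql
-- Raise tempdb's initial file sizes; these become the new startup values.
-- Logical names are the installation defaults - verify with:
--   SELECT name FROM tempdb.sys.database_files;
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 500MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 200MB);
```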

     

     

    Friday, March 23, 2007 8:27 AM
  • How do you recommend one stop/start the SQL Server service on an OLTP database that must be online 24/7?

    Might one be better off truncating the tempdb as part of a daily maintenance plan?

    Friday, March 23, 2007 12:18 PM
  • On a 24/7 availability system you size tempdb for what is required to run the system.  Generally, if you have to reduce it in size, you are taking away space that it needs during normal processing, and it will need to reclaim that space (assuming it is available) later.  If the space is not available, processing will stop (at least the user process, and possibly the SQL Server service).

    In SQL Server 2005 tempdb is more heavily used, and so it generally needs more space than it did before.  Here is a link to a Microsoft white paper that goes into more detail.

    http://www.microsoft.com/technet/prodtechnol/sql/2005/workingwithtempdb.mspx

    It does include information on shrinking tempdb if that is what you decide to do (but it does recommend against it).
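    If you do decide to shrink it anyway, the usual command is DBCC SHRINKFILE.  A sketch only (the target size in MB is a placeholder, and templog is the default logical name of the tempdb log file):

```sql
USE tempdb;
-- Shrink the tempdb log file to roughly 100 MB (target size is in MB).
DBCC SHRINKFILE (templog, 100);
```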

    Friday, March 23, 2007 12:43 PM
  • The question was about the log size.  Sorry I wasn't clear.  I'm not sure why the tempdb needs a transaction log.
    Friday, March 23, 2007 1:30 PM
  • The log file is used to track changes to the database at both the page-allocation and data levels.  In Full or Bulk-Logged recovery mode this includes the ability to use the logs to reproduce those changes and bring a backup of the database up to date.  In these modes the logs grow until they are backed up (hopefully) and truncated.
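    In those modes, truncation normally happens as part of a routine log backup, for example (the database name and backup path here are purely illustrative):

```sql
-- Back up, and thereby allow truncation of, the log of a database
-- in Full or Bulk-Logged recovery.
BACKUP LOG MyUserDb TO DISK = N'D:\Backups\MyUserDb_log.trn';
```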

    However, even when a database is in Simple recovery mode, the log is still used to track changes until they are written to disk and to manage transactions within the database (the log is truncated at every checkpoint to remove committed log records).  This is inherent in the way the storage engine handles database file access.

    tempdb is a special case of a database in Simple recovery mode.  Even though the data in tempdb does not need to be persistent between invocations of the SQL Server service, it does need to be kept consistent and coherent while SQL Server is running.  If there are long-running queries and transactions (using tempdb to cache information) then the log file for tempdb can grow quite large.  If these queries are regular occurrences, then you are better off leaving the logs at that size.
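    A quick way to see how full each database's log is, and what (if anything) is currently preventing truncation, is the following (both exist in SQL Server 2005):

```sql
-- Percentage of each database's log currently in use.
DBCC SQLPERF (LOGSPACE);

-- Reason log truncation is currently waiting, e.g. ACTIVE_TRANSACTION.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'tempdb';
```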

    Friday, March 23, 2007 2:01 PM
  • Our databases are around 50 GB in total.

    Since SQL Server 2005 utilizes tempdb more than 2000 did, and the recommendation is 1 tempdb file/log PER CPU (including dual-core),

    we have 4 x 1 GB tempdb DATA files + 4 x 1 GB tempdb LOG files (8 GB total) on our Xeon dual-core machines (2 physical CPUs, 4 cores).

    So far no problems, but then I am not 100% certain whether it's overkill or underpowered.  At least they haven't needed to grow thus far.
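    For reference, extra tempdb files are added with ALTER DATABASE ... ADD FILE.  This is a sketch only: the logical name, path, and size are made up, and the data files are usually sized equally so the proportional-fill allocation stays balanced.

```sql
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf', SIZE = 1GB);
```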

    Friday, March 23, 2007 2:19 PM
  • Thanks.

    So one would probably leave the system databases in Simple recovery mode and user databases in Full or Bulk-Logged.  And the tempdb log file will be taken care of by SQL Server.

    Friday, March 23, 2007 3:10 PM
  • If the user databases are transactional (updated through the day) then probably yes (be aware there are certain limitations on restoration when using the BULK_LOGGED recovery model).  However, if you have batch-built reporting databases, you can also place those in Simple recovery mode and perform full (and/or incremental) backups at the end of the load process (there should then be no further changes until the next load).

    In terms of the system databases, you may well want msdb in Full (and archive its logs regularly), as it stores a lot of the information for Agent and backups.  The model database should have the default recovery model you want for newly created databases (as they inherit it from model).  The system overrides this when it creates the new tempdb on startup and forces it to the Simple recovery model.
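    The inheritance from model can be seen directly; for example (Simple here is just an illustration - pick whatever default you actually want new databases to get):

```sql
-- New databases inherit their recovery model from model.
ALTER DATABASE model SET RECOVERY SIMPLE;

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name IN (N'model', N'tempdb');
```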

    Another thing to note about tempdb is that on very active systems there is a recommendation to consider placing tempdb on its own RAID 10 disk array.  This gives it high read and write speeds with fault tolerance.

    Friday, March 23, 2007 4:00 PM
  • I notice you said that "If there are long running queries and transactions (using tempdb to cache information) then the log file for tempdb can grow quite large," and I have a problem that might be caused by this.  Do you mean that either long-running queries or transactions can cause the tempdb log file to grow large?  My system runs a long query that selects a lot of data from user tables into temp tables, and the tempdb log file grows to a very large size (50 GB).  There are no transactions here.

    Can you explain more about why the tempdb log file gets that large?  We are querying one third of the data from a 120 GB database and inserting it into tempdb.  Yes, that is huge, but this is the log file, right?  Why is the log file growing so large?  The SQL is like this:

    -- filter data into a temp table
    -- (SELECT ... INTO creates #tmpTable, so no separate CREATE TABLE is needed)
    select *
    into #tmpTable
    from userTable ut
    where ut.timestamp < @StartTime  -- this condition covers one third of the data in the table

    -- generate a report based on the data in #tmpTable
    ...

    Can you think of a possible reason why the log file grows so large?

    Thursday, October 23, 2008 1:51 AM