Recover tempdb without SQL Server restart

  • Question

  • My tempdb is full in production, and multiple databases and applications are accessing the server. I am planning to fix the problem by adding additional files (ndf, ldf) to tempdb on another drive by executing the commands:

    ALTER DATABASE tempdb ADD FILE (NAME = N'tempdev2', FILENAME = N'D:\Data\tempdev2.ndf',

    SIZE = 512MB, FILEGROWTH = 256MB)

    GO

    -- A log file needs ADD LOG FILE and a unique logical name
    ALTER DATABASE tempdb ADD LOG FILE (NAME = N'templog2', FILENAME = N'D:\Log\templog2.ldf',

    SIZE = 512MB, FILEGROWTH = 256MB)

    But the change is not reflected without restarting the server. Do we have any option to fix the issue without a server restart? It is a production server running with 24/7 support.

    Friday, February 17, 2017 6:32 AM

Answers

  • Hello,

    The changes take effect immediately, but as long as free space is available in the primary (first) data file, SQL Server won't use the secondary files.

    Empty the first data file with DBCC SHRINKFILE (Transact-SQL) and the EMPTYFILE parameter, then set a maximum file size on the primary file.

    SQL Server fills the files in a round-robin fashion; as soon as one file is full, it grows the next one and uses it.
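    As a sketch of the steps above (assuming the primary tempdb data file has the default logical name 'tempdev'; check sys.database_files for the actual name, and the 4096MB cap is just an example):

    ```sql
    USE tempdb;
    GO
    -- Move allocations out of the primary data file so new allocations land in the secondary files
    DBCC SHRINKFILE (N'tempdev', EMPTYFILE);
    GO
    -- Cap the primary file so it cannot fill the drive again
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = N'tempdev', MAXSIZE = 4096MB);
    GO
    ```

    Note that EMPTYFILE may not be able to move everything out of a tempdb primary file (system objects remain in it); it only needs to free enough space that new allocations go to the secondary files.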


    Olaf Helper

    [ Blog] [ Xing] [ MVP]

    Friday, February 17, 2017 7:31 AM
    Answerer

All replies

  • The changes WILL take effect. The catalogs being updated only after a restart is what I believe is happening in your case, but AFAIK the changes should be reflected immediately. BTW, which catalog are you querying? You must query sys.database_files or sys.master_files to check whether the files were added.
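    For example, a minimal check against those catalog views would be:

    ```sql
    -- List tempdb files as SQL Server currently sees them
    SELECT name, physical_name, state_desc, size * 8 / 1024 AS size_mb
    FROM tempdb.sys.database_files;

    -- Or from any database, via the server-wide catalog
    SELECT name, physical_name, state_desc
    FROM sys.master_files
    WHERE database_id = DB_ID('tempdb');
    ```

    If the new files show up here with state_desc = 'ONLINE', they were added successfully.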

    Cheers,

    Shashank

    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

    My TechNet Wiki Articles

    MVP

    Friday, February 17, 2017 8:08 AM
    Answerer
  • Yes, you can shrink with DBCC SHRINKFILE; if the size is not reduced, free the procedure cache with DBCC FREEPROCCACHE. You can run this more than once to get down to a reasonable size.

    Note: FREEPROCCACHE clears the procedure cache, and plans will be compiled and cached anew afterwards.
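    A sketch of that sequence (assuming the default logical name 'tempdev' for the primary tempdb data file; the 512 MB target is just an example):

    ```sql
    -- Clear cached plans so the tempdb space they hold can be released
    DBCC FREEPROCCACHE;
    GO
    -- Then retry the shrink (target size in MB)
    USE tempdb;
    GO
    DBCC SHRINKFILE (N'tempdev', 512);
    GO
    ```

    Check sys.database_files for the actual logical file name before running this, and be aware that clearing the procedure cache causes a recompile cost on a busy server.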



    Muthukkumaran Kaliyamoorthy

    Helping SQL DBAs and Developers >>> SqlserverBlogForum

    Friday, February 17, 2017 8:26 AM
  • In my case the tempdb MDF has reached its maximum size and cannot grow because of the disk space. I am creating additional ndf and ldf files and planning to place them on another disk, but the change is not reflected immediately.
    Friday, February 17, 2017 9:17 AM
  • DBCC SHRINKFILE will shrink a file toward its initially allocated size, but if data is present it will not shrink beyond that. My MDF is fully occupied with data up to its maximum size.
    Friday, February 17, 2017 9:19 AM
  • In my case the tempdb MDF has reached its maximum size and cannot grow because of the disk space. I am creating additional ndf and ldf files and planning to place them on another disk, but the change is not reflected immediately.

    What do you mean by "not reflected immediately"? Are you doing it correctly?

    Can you show me a screenshot? Does the query below not show you the additional files?

    use tempdb
    go
    select * from sys.database_files
    Were you able to create the ndf files successfully?



    Cheers,

    Shashank


    Friday, February 17, 2017 9:42 AM
    Answerer
  • DBCC SHRINKFILE will shrink a file toward its initially allocated size, but if data is present it will not shrink beyond that. My MDF is fully occupied with data up to its maximum size.

    Have you found out what is allocated in the MDF?

    This may help: http://www.sqlserverblogforum.com/2016/09/tempdb-database-is-full/



    Muthukkumaran Kaliyamoorthy

    Helping SQL DBAs and Developers >>> SqlserverBlogForum

    Friday, February 17, 2017 11:23 AM
  • The tempdb MDF files are fully occupied by #temp tables created by our stored procedures. We created the temp tables inside the stored procedures for our requirements. Converting these temp tables to CTEs and other optimizations are planned, and a team is working on the optimization, but we still need to support our existing version, which uses temp tables.
    Friday, February 17, 2017 1:10 PM
  • What does the statement below return? You mention that you want to add more files, so use this query to check for allocation-page contention in tempdb:

    SELECT session_id,
           wait_type,
           wait_duration_ms,
           blocking_session_id,
           resource_description,
           ResourceType = CASE
               WHEN (CAST(RIGHT(resource_description, LEN(resource_description) - CHARINDEX(':', resource_description, 3)) AS INT) - 1) % 8088 = 0 THEN 'Is PFS Page'
               WHEN (CAST(RIGHT(resource_description, LEN(resource_description) - CHARINDEX(':', resource_description, 3)) AS INT) - 2) % 511232 = 0 THEN 'Is GAM Page'
               WHEN (CAST(RIGHT(resource_description, LEN(resource_description) - CHARINDEX(':', resource_description, 3)) AS INT) - 3) % 511232 = 0 THEN 'Is SGAM Page'
               ELSE 'Is Not PFS, GAM, or SGAM page'
           END
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'PAGE%LATCH_%'
      AND resource_description LIKE '2:%'


    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/


    Sunday, February 19, 2017 6:23 AM
  • The ndf files have been created successfully, but SQL Server won't use them without a server restart.
    Monday, February 20, 2017 12:57 PM
  • It doesn't return any results.
    Monday, February 20, 2017 12:59 PM
  • It won't require a restart?
    Monday, February 20, 2017 1:02 PM
  • No. I have been getting help from people, but my issue was not fixed. Finally we restarted the server, and tempdb was successfully reinitialized, with an outage of 60 minutes of production downtime.
    Monday, February 20, 2017 1:04 PM
  • Freeing the procedure cache had no impact on reducing the size.
    Monday, February 20, 2017 1:04 PM
  • No. I have been getting help from people, but my issue was not fixed. Finally we restarted the server, and tempdb was successfully reinitialized, with an outage of 60 minutes of production downtime.

    We already suggested that the change would take effect after a restart, although I was not quite sure about this.

    Cheers,

    Shashank


    Monday, February 20, 2017 1:18 PM
    Answerer
  • Surely, if you need 24/7 uptime, you are using a cluster and not a single server for your production databases?

    You would then just need to fail services over to the other node in the cluster and restart the now-passive node.

    If you are not using clustering or availability groups, then perhaps you might want to consider them for resilience?

    Cheers,


    Kind regards, Kev

    Monday, February 20, 2017 1:32 PM
  • Yes, a restart was suggested, but the goal is to fix the issue without a restart.
    Monday, February 20, 2017 2:11 PM
  • Yes, we need to allocate additional secondary files without a restart when the primary MDF is 100% full.
    Monday, February 20, 2017 2:13 PM
  • Dear Olaf Helper,

    With the shrink command we regain some of the space, so the additional ndf files can then be added. I am planning to try this approach the next time production faces the issue, but I don't want another downtime. I will try it; if it works without downtime, that would be great.

    Regards,

     Kannan.C


    Monday, February 20, 2017 2:16 PM