Could not allocate space for object 'dbo.SORT temporary run storage'

    Question

  • Hello all,
    I’m trying to transfer/transform some data from one table to another, both in the same database. The source table holds 92M records.
    To do so I'm using the following statement:
     
    INSERT INTO [dbo].[Messages1]
               ([Time] ,[ID] ,[ResponseTo] ,[MessageMD5] ,[source] ,[dest]
               ,[arln_err] ,[ErrorDescription] ,[ErrorNumber] ,[ErrorSource]
               ,[ErrorType] ,[HttpCall] ,[HttpStatus] ,[QuoteAmount]
               ,[ReservationPickupLocation] ,[RatePickupLocationDescription] ,[RateReqCarType] ,[RateReqPickupLocation]
               ,[RejectMessage] ,[ReservationAmount] ,[ReservationCarType] ,[ReservationCarTypeDescription]
               ,[RatePickupLocation] ,[resp1] ,[ResultNum] ,[strRejectMessage]
               ,[strResultNum])
    SELECT [Time] ,[ID] ,[ResponseTo] ,[MessageMD5] ,[source] ,[dest]
          ,[EE01] ,[EE02] ,[EE03] ,[EE04]
          ,[EE05] ,[EE06] ,[EE07] ,cast([EE08] as float)
          ,[EE09] ,[EE10] ,[EE11] ,[EE12]
          ,[EE13] ,cast ([EE14] as float) ,[EE15] ,[EE16]
          ,[EE17] ,[EE18] ,[EE19] ,[EE20]
          ,[EE21]
      FROM [dbo].[Messages]

     
    And I’m getting the following exception:
     
    Msg 1105, Level 17, State 2, Line 1
    Could not allocate space for object 'dbo.SORT temporary run storage:  185394470715392' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
    Msg 9002, Level 17, State 4, Line 1
    The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
     
    I have enough disk space, and the PRIMARY and log files are set to auto-grow. I also tried to add additional log files, with no success.
     
    How should I perform this task? How do I stop database logging for this action, or even for the whole database?
     
    Regards
                Ronen S.
    Monday, January 15, 2007 11:29 PM

Answers

  • The error messages refer to tempdb space usage.  They indicate that the tempdb data and log files ran out of space. 

    If you are sure that you have enough disk space for the tempdb data and log files, then the problem is probably that autogrow is not enabled, or that your tempdb files have reached their size limit. 
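    You can check both settings with a query like this (a sketch for SQL Server 2005 and later, using the `sys.database_files` catalog view; sizes are stored in 8 KB pages):

    ```sql
    -- Show tempdb file sizes, limits, and growth settings
    SELECT name,
           physical_name,
           size * 8 / 1024 AS size_mb,
           CASE max_size
                WHEN -1 THEN 'unlimited'
                ELSE CAST(max_size * 8 / 1024 AS varchar(20)) + ' MB'
           END AS max_size,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(20)) + ' %'
                ELSE CAST(growth * 8 / 1024 AS varchar(20)) + ' MB'
           END AS growth
    FROM tempdb.sys.database_files;
    ```

    A growth value of 0 means autogrow is disabled for that file.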

    Thanks

    Stephen

    Saturday, January 20, 2007 5:44 AM

All replies

  • It will be that both tempdb and the data/log files of your user database are growing, and then you run out of disk somewhere.  Try rerunning the statement and monitoring the disks the database files reside on.

    Another thing you can do is break up the insert into smaller transactions, i.e. insert ranges of maybe one or two million rows at a time.  That puts less pressure on the log and should also be faster.
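    One way to batch it (a sketch only — it assumes the source table has an indexed numeric [ID] column; the elided column lists are the ones from the original statement):

    ```sql
    -- Insert in ~1M-row batches, keyed on an indexed numeric column.
    DECLARE @lo bigint, @hi bigint;
    SELECT @lo = MIN([ID]), @hi = MAX([ID]) FROM [dbo].[Messages];

    WHILE @lo <= @hi
    BEGIN
        INSERT INTO [dbo].[Messages1] ([Time], [ID] /* ...rest of the column list... */)
        SELECT [Time], [ID] /* ...rest of the select list, with the CASTs... */
        FROM [dbo].[Messages]
        WHERE [ID] >= @lo AND [ID] < @lo + 1000000;

        -- Each iteration is its own transaction, so the log can be reused
        -- between batches (under the simple recovery model).
        SET @lo = @lo + 1000000;
    END
    ```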

    Tuesday, January 16, 2007 12:21 AM
  • Hello Greg,

    It looks to me like the problem occurs while allocating temporary sort space for the primary key.

    The statement fails after running for 4 minutes, so disk space is not running out (I have 52 GB of free disk). Anyhow, how can I monitor table/log size?

    About your second suggestion: how can I take a million records at a time? Is there something like Oracle's pseudo-column ROWNUM?

    Thanks in advance.

    Ronen S.

    Tuesday, January 16, 2007 12:32 AM
  • I would BCP your data out to a file and then BCP the data back in to a table with no indexes.

    Then create your indexes afterwards.

    Sunday, January 21, 2007 10:53 AM
  • I'm having a similar problem, and the post marked as the 'answer' basically just re-states the error message.  How do you check to see if your database is set for autogrowth?  How do you change it if it's not?  How do you clear a full transaction log?  And what the heck is "BCP"?  I have a 3 million row table in my database and I'm having issues running queries against it - I've never had to deal with a database table this large before.

     

    I don't mean to sound ungrateful, but why do people always wait for someone to ask these kinds of questions in response to an answer, instead of just saying how to do everything when they are answering the question the first time?

     

    -Tom

    Tuesday, October 28, 2008 2:35 PM
  • Right click on the database in management studio and click properties.

     

    In there, on the Files section, you will see how the files are configured.

     

    To clear a full transaction log, back it up. Right-click on the database, select Tasks and then Back Up. In the wizard make sure you select Transaction Log as the backup type. Add a new destination (click the Add button); once you have a file configured you can click OK.
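    The same thing can be done in T-SQL (a sketch — the database name and backup path here are placeholders, not values from this thread):

    ```sql
    -- Backing up the log marks its inactive portion as reusable,
    -- which is what "clears" a full transaction log.
    BACKUP LOG [MyDatabase]
    TO DISK = N'C:\Backups\MyDatabase_log.bak';
    ```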

     

    If you don't want point-in-time recovery, change your recovery model (in database properties) to Simple. That way the transaction log will be cleared of records relating to closed transactions when a checkpoint occurs (this is done automatically by the server).

     

    BCP (bulk copy program) is a command-line utility for exporting and importing data.

     

     

    Tuesday, October 28, 2008 3:03 PM
  • I found a solution on another site:

     

    http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/Q_22064393.html

     

    From that, I used these SQL commands:

     

    USE [master]
    GO
    ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdb', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\MyTempFile.ndf' , SIZE = 102400KB , FILEGROWTH = 10240KB )
    GO

     

    And then I could run queries.

     

    But then how do you control the growth?  As in, can you delete the file and SQL will auto-re-create it?  Or is it one of those deals where if you delete it and SQL looks for it, SQL will blow up or have issues because it's not there anymore?  Is there a way to manage this file (i.e. delete & re-create it after it reaches a certain limit) programmatically, within SQL?  And I still don't know what BCP is...

     

    -Tom

    Tuesday, October 28, 2008 3:04 PM
  • All this does is add a new file to your database, a bit like adding more pages to a book. SQL is storing your data in these files, so you can't just delete one, otherwise you will lose your data.

     

    SQL manages the space in these files, so if you delete data the freed space will be reused.

    Tuesday, October 28, 2008 3:08 PM
  • Thank you for your quick responses.

     

    Thing is, there shouldn't be more data to store for the database table I'm using.

     

    I'm implementing a circular queue of 3 million rows.  The table is created and the rows filled by a C# .NET console app.  As the program checks for new data to put into the table, rows are updated based on whether they are marked "Free".  If no rows are "Free" anymore, it will find the oldest record and update on top of that one.

     

    If it's just query logging that is happening, and that is what this extra file is for, I don't really care about logging that, and I don't want to have to add more and more files to my database to accommodate it.  My SQL instance is currently installed on a 20 GB virtual disk - they might let it grow to 30, but anything more and they might start asking why we're taking up so much space on the SAN.  Is there a solution to prevent SQL from eating up disk space with log data, if that's what this is, as I believe?

     

    Thanks,

    Tom

    Tuesday, October 28, 2008 3:19 PM
  • The transaction log holds the changes you make to your data. If it takes up 10 GB, then you have modified that amount of data, minus the overhead for the information required by each transaction.

     

    Tuesday, October 28, 2008 3:24 PM
  • Ok, I think what you've been saying in the past couple of posts just clicked...

     

    If I want to manage this log, I have to set it to restricted file growth (and set a value) in Database Properties > Files (clicking the ellipsis by the description), and then, under the "Transaction Log Shipping" section, create a backup schedule that will back up (and clear) the current log.  But then that backup location has to be managed.  Can those backups be safely deleted after the log has been backed up (and therefore cleared)?  Basically my program can rebuild the database and data in 2 hours programmatically, and will have to do so anyway on any kind of failure, so I don't care about backing up the data.

     

    Thanks,

    Tom

     

    Tuesday, October 28, 2008 3:34 PM
  • You should be able to restore from backup in under 2 hours. However, if you are happy using your tool to rebuild the database, and since you are not familiar with database backups, then you should set your recovery model to Simple.

     

    The simple view of how a SQL database works is this.

     

    Your data is stored in tables, which are stored as pages in a file. If you make a change to a row, the information about that change is stored in the transaction log. SQL applies the changes from the transaction log to the data files during a process called a checkpoint. Once the changes are written to the data files, they stay in the transaction log until you back it up, or until a checkpoint if the simple recovery model is set.

     

    So why back it up? If you want to recover to 11 o'clock today, then you need to restore the database from the last backup and then reapply the transactions from the transaction log to get to that point in time. That's why the transactions stay in the transaction log until it is backed up.

     

    If you don't need point-in-time recovery, then setting the simple recovery model tells SQL that you don't want to be able to restore individual transactions, and therefore once the pages are written to the data files the transactions are removed from the transaction log.
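    As the original 9002 message suggests, you can also check what is preventing log-space reuse for each database (a sketch for SQL Server 2005 and later, where these columns exist in sys.databases):

    ```sql
    -- Show each database's recovery model and the reason its
    -- log space cannot currently be reused (e.g. LOG_BACKUP, ACTIVE_TRANSACTION)
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;
    ```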

    Tuesday, October 28, 2008 3:42 PM
  • Thank you very much in helping me understand all this...

     

    You are right that I won't need transaction log shipping/backup.  My program has to both update the SQL database table and send files to another server for each update, so if there's a catastrophic failure, it's easier to just re-initialize my program to re-create the table and send the latest file updates to the other server than to do any kind of SQL recovery, which wouldn't help get the other server in sync anyway.  The reason it will take 2 hours is that it drops & re-creates the 3 million rows, then finds metadata from XML files housed in SharePoint based on a URL date structure (it loops through the dates from today back to the first date where an XML file exists), and sends that XML metadata to the other server.  SQL is used to record each XML file that was found.

     

    So how do I set the recovery model to "simple"?

     

    Thanks,
    Tom

    Tuesday, October 28, 2008 3:55 PM
  • Most of this information you should be able to find in Books Online.

     

    So to answer this, I suggest you look up "simple recovery model" in Books Online. There is more information and detail on how to configure it.

    Tuesday, October 28, 2008 3:59 PM
  • ALTER DATABASE Northwind SET RECOVERY SIMPLE

    GO

     

    Thanks for the help.

     

    -Tom

    Tuesday, October 28, 2008 4:05 PM
  • glad you found the answer

     

    Tuesday, October 28, 2008 4:09 PM
  • Also, from my example where I added a transaction log file using this statement -

     

    ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdb', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\GFeeder.ndf' , SIZE = 102400KB , FILEGROWTH = 10240KB )

    GO

     

    I only just discovered the syntax I needed to remove that file:

     

    USE [tempdb]

    GO

    DBCC SHRINKFILE ( 'tempdb', EMPTYFILE )

    GO

    ALTER DATABASE [tempdb] REMOVE FILE tempdb

    GO

     

    The 'tempdb' in the DBCC SHRINKFILE command reflects the "NAME" property in the first ALTER DATABASE command.  When using the 2nd ALTER DATABASE command, the filename doesn't get single quotes, for some reason. 

     

    The documentation on MSDN says you have to use the "logical file name" for the SHRINKFILE command, but doesn't explain what they mean by "logical file name".  I tried 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\GFeeder.ndf', 'GFeeder.ndf', tempdb, and then finally 'tempdb' worked.  Then, when trying to use the same file name in the 2nd ALTER DATABASE statement, the quotes didn't work.  I wish Microsoft would learn some consistency with its syntax!  Anyway, it's done now.  Here's to hoping others won't have to go through what I did to figure all this out.
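    For anyone else hunting for it: the logical file name can be listed directly (a sketch; sys.database_files is available from SQL Server 2005 on). The logical name is the `name` column, as opposed to the `physical_name` on disk:

    ```sql
    USE [tempdb];
    GO
    -- 'name' is the logical file name used by DBCC SHRINKFILE and
    -- ALTER DATABASE ... REMOVE FILE; 'physical_name' is the path on disk.
    SELECT name AS logical_name, physical_name
    FROM sys.database_files;
    ```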

     

    -Tom

    Wednesday, November 12, 2008 8:56 PM
  • Hi, all --
    I realize this is an old thread, but I figured I'd post my resolution here, since this thread is the first Google result for the error message.

    Long story short... for us, this was actually about
    RUNNING OUT OF DISC SPACE
    on the hard disc that tempdb resides on.

    Note that our problem (& resolution) happened with SQL Server 2005.  ((Aside:  I don't know why anyone suggested emptying log files by backing them up, since tempdb will not allow backups.))

    Our situation:
    We had a "SELECT" with a "JOIN" that multiplied an already large table.  When that select was modified, adding a "GROUP BY" to it, with "MAX"s on string functions, it ran with error messages: 

    Msg 1105, Level 17, State 2, Line 6
    Could not allocate space for object 'dbo.SORT temporary run storage:  140794359840768' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
    Msg 9002, Level 17, State 4, Line 6
    The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

    Right-clicking on tempdb and looking at properties, then under the "Files" tab:  autogrow was indeed enabled.
    And looking under the "options" tab:  the recovery mode was already "simple".

    Our most *critical* piece of information didn't show up until we happened to be looking at the server desktop while the query re-ran.  The session's SysTray produced a balloon saying that C: was low on space:  the C: drive had gone from 29.6 GB free down to only about 800 MB free!

    SHORT term steps:  
    1) After the problem happened the first time, we thought the Temporary Tables in tempdb (some of which were a week old) might be an issue.  After trying multiple approaches, it was *restarting the server* that got rid of those Temporary Tables.
    (I'm only reporting that server-restart for completeness:  in hindsight, I do not know if getting rid of those Temporary Tables was even necessary.)

    2) The 2nd time the problem happened (and we noticed the free space problem on C:), we returned the free space to 29.6 GB by simply restarting SQL Server (not the whole server).  To do that in Management Studio, we right-clicked on the servername in object explorer, and selected "Restart".

    3)  [Note:  sometimes, disc space can disappear while a query window's *results* are using a huge amount of disc space.  In that situation, disc space returns by simply closing that query window... so in that case, you would not even have to restart SQL Server.]

    MEDIUM term steps:
    4) Running the query on another server, where the tempdb resides on a disc with more disc space.

    5) Changing our query so that the huge JOIN goes into a table (using "SELECT... INTO..."), with the GROUP BY / MAX step later, in a separate SELECT.  It makes the query slower in our case, but it's liveable.

    LONG term steps
    (we have NOT done this yet):
    6) Getting a bigger hard disc, or
    getting a 2nd hard disc with enough space, and moving tempdb to that disc.  It looks pretty easy: 
    http://blog.sqlauthority.com/2007/04/01/sql-server-tempdb-is-full-move-tempdb-from-one-drive-to-another-drive/

    ... but as I said, we have not done this yet.  (Only one hard disc on that server, currently.)
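    Step 5 above can be sketched like this (all table and column names here are illustrative, not our actual schema):

    ```sql
    -- Materialize the huge JOIN into a staging table first...
    SELECT a.KeyCol, a.StrCol, b.OtherStrCol
    INTO dbo.JoinStage
    FROM dbo.BigTable AS a
    JOIN dbo.OtherTable AS b ON b.KeyCol = a.KeyCol;

    -- ...then do the GROUP BY / MAX in a separate, smaller step,
    -- so the sort works off the staging table instead of spilling
    -- the whole join into tempdb at once.
    SELECT KeyCol, MAX(StrCol) AS MaxStr, MAX(OtherStrCol) AS MaxOtherStr
    FROM dbo.JoinStage
    GROUP BY KeyCol;
    ```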

    Good luck... HTH,
    ---Doug


    Doug Ivison

    Wednesday, September 04, 2013 5:31 PM