Monday, January 15, 2007 11:29 PM

Hello all,
I’m trying to transfer/transform some data from one table to another, both in the same database. The source table holds 92M records.
To do so I’m using the following statement:
INSERT INTO [dbo].[Messages1]
([Time] ,[ID] ,[ResponseTo] ,[MessageMD5] ,[source] ,[dest]
,[arln_err] ,[ErrorDescription] ,[ErrorNumber] ,[ErrorSource]
,[ErrorType] ,[HttpCall] ,[HttpStatus] ,[QuoteAmount]
,[ReservationPickupLocation] ,[RatePickupLocationDescription] ,[RateReqCarType] ,[RateReqPickupLocation]
,[RejectMessage] ,[ReservationAmount] ,[ReservationCarType] ,[ReservationCarTypeDescription]
,[RatePickupLocation] ,[resp1] ,[ResultNum] ,[strRejectMessage])
SELECT [Time] ,[ID] ,[ResponseTo] ,[MessageMD5] ,[source] ,[dest]
,[EE01] ,[EE02] ,[EE03] ,[EE04]
,[EE05] ,[EE06] ,[EE07] ,cast([EE08] as float)
,[EE09] ,[EE10] ,[EE11] ,[EE12]
,[EE13] ,cast([EE14] as float) ,[EE15] ,[EE16]
,[EE17] ,[EE18] ,[EE19] ,[EE20]
And I’m getting the following exception:
Msg 1105, Level 17, State 2, Line 1
Could not allocate space for object 'dbo.SORT temporary run storage: 185394470715392' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
Msg 9002, Level 17, State 4, Line 1
The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
I have enough disk space, and the PRIMARY and log files are set to autogrow. I also tried adding additional log files, with no success.
How should I perform this task? How do I disable logging for this action, or even for the whole database?
Tuesday, January 16, 2007 12:21 AM
My guess is that both tempdb and the data/log files of your user database are growing, and then you run out of disk somewhere. Try rerunning while monitoring the disks the database files reside on.
Another thing you can do is break the insert up into smaller transactions, i.e. insert ranges of maybe one or two million rows at a time. That should also perform better.
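For illustration, a batched copy might look like the sketch below. This is only an outline, not code from the thread: the source table name ([dbo].[Messages]) and the assumption that [ID] is a unique, increasing integer key are mine, and the remaining columns are elided.

```sql
-- Hypothetical sketch of a batched copy. Assumes the source table is
-- [dbo].[Messages] and that [ID] is a unique, increasing integer key.
DECLARE @BatchSize int, @LastID int, @MaxID int
SET @BatchSize = 1000000
SELECT @LastID = 0, @MaxID = MAX([ID]) FROM [dbo].[Messages]

WHILE @LastID < @MaxID
BEGIN
    -- Each iteration is its own implicit transaction.
    INSERT INTO [dbo].[Messages1] ([Time], [ID] /* , ...remaining columns... */)
    SELECT [Time], [ID] /* , ...remaining columns... */
    FROM [dbo].[Messages]
    WHERE [ID] > @LastID AND [ID] <= @LastID + @BatchSize

    SET @LastID = @LastID + @BatchSize
END
```

Because each batch commits separately, the log can be truncated or backed up between batches instead of holding the changes for all 92M rows at once.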
Tuesday, January 16, 2007 12:32 AM
It looks to me like the problem occurs while creating temporary table space for the primary-key sort.
The statement fails after running for 4 minutes, so disk space is not running out (I have 52 GB free). Anyhow, how can I monitor table/log size?
About your second suggestion: how can I take a million records at a time? Is there something like Oracle's pseudo-column ROWNUM?
Thanks in advance.
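For what it's worth, the closest SQL Server analogue to Oracle's ROWNUM is the ROW_NUMBER() function, available from SQL Server 2005 on. A hypothetical sketch of taking the first million rows, again assuming a source table [dbo].[Messages] ordered by [ID]:

```sql
-- ROW_NUMBER() numbers the rows of an ordered result set, much like
-- Oracle's ROWNUM. Table name and ordering column here are assumptions.
SELECT [Time], [ID] /* , ...remaining columns... */
FROM (
    SELECT [Time], [ID] /* , ...remaining columns... */,
           ROW_NUMBER() OVER (ORDER BY [ID]) AS rn
    FROM [dbo].[Messages]
) AS numbered
WHERE rn BETWEEN 1 AND 1000000
```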
Saturday, January 20, 2007 5:44 AM
The error messages are about tempdb space usage: they indicate that the tempdb data and log files ran out of space.
If you are sure that you have enough space for the tempdb data and log files, then the problem is probably that autogrow is not enabled, or that your tempdb files have reached their size limit.
Sunday, January 21, 2007 10:53 AM
I would BCP your data out to a file and then BCP the data back into a table with no indexes.
Then create your indexes afterwards.
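As a rough sketch (the server, database, and file names here are made up, not from the thread), the bcp steps might look like:

```
rem Export the table in native format, then bulk-load it into the new table.
rem -n = native format, -T = trusted (Windows) connection,
rem -b = commit every 100000 rows so the load is batched.
bcp MyDb.dbo.Messages out C:\temp\messages.dat -n -S MyServer -T
bcp MyDb.dbo.Messages1 in C:\temp\messages.dat -n -S MyServer -T -b 100000
```

Loading into a heap with no indexes, then building the indexes afterwards, avoids sorting the whole table in tempdb during the load itself.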
Tuesday, October 28, 2008 2:35 PM
I'm having a similar problem, and the post marked as the 'answer' basically just restates the error message. How do you check whether your database is set for autogrowth? How do you change it if it's not? How do you clear a full transaction log? And what the heck is "BCP"? I have a 3-million-row table in my database and am having issues running queries against it - I've never had to deal with a database table this large before.
I don't mean to sound ungrateful, but why do people always wait for someone to ask these kinds of questions in response to an answer, instead of just saying how to do everything when they are answering the question the first time?
Tuesday, October 28, 2008 3:03 PM
Right-click the database in Management Studio and click Properties.
On the Files page you will see how the files are configured.
To clear a full transaction log, back it up. Right-click the database, select Tasks and then Back Up. In the wizard, make sure you select Transaction Log as the backup type. Add a new destination (click the Add button); once you have a file configured, you can click OK.
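The same backup can be done in T-SQL; a minimal sketch, with a made-up database name and path:

```sql
-- Back up the transaction log, which allows its space to be reused.
BACKUP LOG [MyDb] TO DISK = N'C:\Backups\MyDb_log.bak'
```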
If you don't want point-in-time recovery, change your recovery model (in Database Properties) to Simple. That way the transaction log will be cleared of records relating to committed transactions whenever a checkpoint occurs (this is done automatically by the server).
BCP (bulk copy program) is a command-line utility for exporting and importing data in bulk.
Tuesday, October 28, 2008 3:04 PM
I found a solution on another site. From that, I used this SQL command:
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdb', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\MyTempFile.ndf' , SIZE = 102400KB , FILEGROWTH = 10240KB )
And then I could run queries.
But so how do you control the growth, as in, can you delete the file and SQL will auto-re-create it? Or is it one of those deals where if you delete it and SQL looks for it, SQL will blow up or have issues because it's not there anymore? Is there a way to manage this file (i.e. delete & re-create it after it reaches a certain limit) programmatically, within SQL? And I still don't know what BCP is...
Tuesday, October 28, 2008 3:08 PM
All this does is add a new file to your database, a bit like adding more pages to a book. SQL Server stores your data in these files, so you can't just delete one, otherwise you will lose your data.
SQL Server manages the space in these files, so if you delete data, the space will be reused.
Tuesday, October 28, 2008 3:19 PM
Thank you for your quick responses.
Thing is, there shouldn't be any more data to store for the database table I'm using.
I'm implementing a circular queue of 3 million rows. The table is created and the rows filled by a C# .NET console app. As the program checks for new data to put into the table, rows are updated based on whether they are marked "Free". If no rows are "Free" anymore, it finds the oldest record and updates on top of that one.
If it's just query logging that is happening, and that is what this extra file is for, I don't really care about logging that, and I don't want to have to add more and more files to my database to accommodate it. My SQL instance is currently installed on a 20 GB virtual disk; they might let it grow to 30, but anything more and they might start asking why we're taking up so much space on the SAN. Is there a solution to that, to prevent SQL from eating up disk space with log data, if that's what this is, as I believe?
Tuesday, October 28, 2008 3:24 PM

The transaction log holds the changes you make to your data. If it takes up 10 GB, then you have modified that amount of data, minus the overhead of the information required by the transactions.
Tuesday, October 28, 2008 3:34 PM
Ok, I think what you've been saying in the past couple of posts just clicked...
If I want to manage this log, I have to set it to restricted file growth (and set a value) in Database Properties > Files (clicking the ellipsis next to the description), and then, under the "Transaction Log Shipping" section, create a backup schedule that will back up (and clear) the current log. But then that backup location has to be managed. Can those backups be safely deleted after the log has been backed up (and therefore cleared)? Basically, my program can rebuild the database and its data programmatically in 2 hours, and will have to do so anyway on any kind of failure, so I don't care about backing up the data.
Tuesday, October 28, 2008 3:42 PM
You should be able to restore from a backup in under 2 hours. However, since you are happy using your tool to rebuild the database, and you don't need database backups, you should set your recovery model to Simple.
The simple view of how a SQL Server database works is this:
Your data is stored in tables, which are stored as pages in a data file. When you change a row, information about that change is written to the transaction log. SQL Server applies the changes from the transaction log to the data files during a process called a checkpoint. Even once the changes are written to disk, they stay in the transaction log until you back it up, unless the simple recovery model is set.
So why back it up? If you want to recover to 11 o'clock today, you need to restore the database from the last backup and then reapply the transactions from the transaction log to get to that time. That's why the transactions stay in the transaction log until it is backed up.
If you don't need point-in-time recovery, setting the simple recovery model tells SQL Server that you don't want to be able to replay those transactions, so once the pages are written to the data files, the transactions are removed from the transaction log.
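A quick way to check this for yourself (available from SQL Server 2005 on) is to query each database's recovery model, along with the log_reuse_wait_desc column that the original error message mentioned:

```sql
-- One row per database: its recovery model, and what (if anything)
-- is currently preventing the transaction log from being reused.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
```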
Tuesday, October 28, 2008 3:55 PM
Thank you very much for helping me understand all this...
You are right that I then won't need transaction log shipping/backup. My program has to both update the SQL database table and send files to another server on each update, so if there's a catastrophic failure, it's easier to just re-initialize my program to re-create the table and send the latest file updates to the other server than to do any kind of SQL recovery, which wouldn't help get the other server in sync anyway. The reason it takes 2 hours is that it drops and re-creates the 3 million rows, then finds metadata from XML files housed in SharePoint based on a URL date structure (it loops through the dates from today back to the first date where the XML file exists), and sends that XML metadata to the other server. SQL is used to record which XML files were found.
So how do I set the recovery model to "simple"?
Tuesday, October 28, 2008 3:59 PM
Most of this information you should be able to find in Books Online.
So to answer this, I suggest you look up "simple recovery model" in Books Online. There is more information there, with details on how to configure it.
Tuesday, October 28, 2008 4:05 PM
ALTER DATABASE Northwind SET RECOVERY SIMPLE
Thanks for the help.
Tuesday, October 28, 2008 4:09 PM

Glad you found the answer.
Wednesday, November 12, 2008 8:56 PM
Also, from my example where I added an extra data file to tempdb using this statement -
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdb', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\GFeeder.ndf' , SIZE = 102400KB , FILEGROWTH = 10240KB )
I only just discovered the syntax I needed to remove that file:
DBCC SHRINKFILE ( 'tempdb', EMPTYFILE )
ALTER DATABASE [tempdb] REMOVE FILE tempdb
The documentation on MSDN says you have to use the "logical file name" for the SHRINKFILE command, but doesn't explain what it means by "logical file name". I tried 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\GFeeder.ndf', 'GFeeder.ndf', tempdb, and then finally 'tempdb' worked. Then, when using the same file name in the second ALTER DATABASE statement, the quotes didn't work. I wish Microsoft would learn some consistency with its syntax! Anyway, it's done now. Here's hoping others won't have to go through what I did to figure all this out.
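For anyone else hunting for it: the "logical file name" is the NAME value supplied in ALTER DATABASE ... ADD FILE, and you can list the logical and physical names of tempdb's files with a query like this:

```sql
-- sys.database_files shows one row per file in the current database;
-- the name column is the logical name DBCC SHRINKFILE expects.
SELECT name AS logical_name, physical_name
FROM tempdb.sys.database_files
```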