MS KB regarding table variable & temp tables... any idea?

    Question

  • Both temp tables & table variables are created & stored in tempdb. What does the last line (underlined) below mean?

     

    Q4: Are table variables memory-only structures that are assured better performance as compared to temporary or permanent tables, because they are maintained in a database that resides on the physical disk?

    A4: A table variable is not a memory-only structure. Because a table variable might hold more data than can fit in memory, it has to have a place on disk to store data. Table variables are created in the tempdb database similar to temporary tables. If memory is available, both table variables and temporary tables are created and processed while in memory (data cache).

    I've extracted this from MS KB: http://support.microsoft.com/kb/305977/en-us

     

    My system has lots of memory, but when I create both types of tables they are still created in tempdb. I'm using the SQL script below:


    select * from tempdb.INFORMATION_SCHEMA.TABLES
    -- No records
    GO
    
    declare @tabvar table(sn int primary key)
    
    select * from tempdb.INFORMATION_SCHEMA.TABLES
    -- TABLE_CATALOG	TABLE_SCHEMA	TABLE_NAME	TABLE_TYPE
    -- tempdb		dbo		#31EC6D26	BASE TABLE
    
    select name,* from tempdb.sys.sysindexes
    where name like 'PK%'
    -- name				id		status
    -- PK__#31EC6D26__32E0915F	837578022	2066
    GO
    
    select * from tempdb.INFORMATION_SCHEMA.TABLES
    -- No records
    GO
    
    declare @tabvar1 table(sn int)
    declare @tabvar2 table(sn int)
    
    select * from tempdb.INFORMATION_SCHEMA.TABLES
    -- TABLE_CATALOG	TABLE_SCHEMA	TABLE_NAME	TABLE_TYPE
    -- tempdb		dbo		#33D4B598	BASE TABLE
    -- tempdb		dbo		#34C8D9D1	BASE TABLE
    GO
    


     

    Please explain: under what circumstances are they created in memory?

     


    ~Manu
    http://sqlwithmanoj.wordpress.com
    Wednesday, December 29, 2010 2:52 PM

Answers

  • Basically, whether the data in table variables and temp tables is in memory or not is treated the same as any regular table.  SQL Server tries to keep recently used rows in memory.  When you create and insert rows into a table variable or temp table, they have, of course, been recently used, so are probably in memory.  And since they don't, in most cases, live for very long the data is likely to stay in memory until the table variable goes out of scope or the temp table is (explicitly or implicitly) dropped.  But if they have very large amounts of data, or live for a long time or your server is under heavy memory pressure, the data can be removed from memory just like the data in any regular table.

    The point of the KB article you reference is that there are a number of reasons for selecting a table variable vs a temp table, but whether or not the data is kept in memory is not a consideration since both are treated the same when it comes to whether or not the data is in memory.
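
    To see this in practice, a query along these lines (just a sketch; it assumes you have VIEW SERVER STATE permission) counts the tempdb pages currently sitting in the buffer pool. Run it before and after populating a temp table or table variable and, while memory is available, the count goes up without corresponding growth in disk reads.

    -- Approximate number of tempdb pages currently held in memory (data cache)
    SELECT COUNT(*)              AS tempdb_pages_in_memory,
           COUNT(*) * 8 / 1024.0 AS approx_mb_in_memory
    FROM sys.dm_os_buffer_descriptors
    WHERE database_id = DB_ID('tempdb');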

    Tom

    Wednesday, December 29, 2010 3:05 PM
  • This is a poorly worded explanation.

    It means if there is cache memory available, they are cached in the data cache, in addition to being written to tempdb.

    Thursday, December 30, 2010 5:39 PM
    Moderator
  • What that article is saying is that the data in table variables and the data in temp tables is treated the same.  It might all stay in memory and some or all of it might be written to disk in the tempdb.  It does not mean that you will not have entries made in the system tables for tempdb for those objects.  That will always happen for all table variables and all temp tables.

    Tom

    Thursday, December 30, 2010 5:39 PM

All replies

  • Basically, whether the data in table variables and temp tables is in memory or not is treated the same as any regular table.  SQL Server tries to keep recently used rows in memory.  When you create and insert rows into a table variable or temp table, they have, of course, been recently used, so are probably in memory.  And since they don't, in most cases, live for very long the data is likely to stay in memory until the table variable goes out of scope or the temp table is (explicitly or implicitly) dropped.  But if they have very large amounts of data, or live for a long time or your server is under heavy memory pressure, the data can be removed from memory just like the data in any regular table.

    The point of the KB article you reference is that there are a number of reasons for selecting a table variable vs a temp table, but whether or not the data is kept in memory is not a consideration since both are treated the same when it comes to whether or not the data is in memory.

    Tom

    Wednesday, December 29, 2010 3:05 PM
  • In addition, I think since SQL Server 2005 MS has introduced a temporary table caching mechanism, meaning a table is not dropped but truncated (cached). The main goal of the temp table cache is to reduce the costs associated with temp table creation. The second temp table creation is much faster: instead of dropping and creating the table, it is just truncated.

    --The following query will tell you how many temp tables are cached:

    select count(*) from sys.dm_os_memory_cache_entries where
    type='CACHESTORE_TEMPTABLES'

    Some conditions:

    CREATE TABLE #temp (a int NOT NULL unique)  -- this table will be cached

    CREATE TABLE #temp (a int NOT NULL)
    CREATE UNIQUE INDEX idx on #temp (a)  -- this one will not be cached (DDL after creation disables caching)
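
    As a rough illustration (a sketch only; the procedure name is made up), caching applies to temp tables created inside stored procedures, so executing a procedure like this twice and re-running the count query above should show the cached entry being reused rather than recreated:

    CREATE PROCEDURE dbo.usp_TempCacheDemo   -- hypothetical demo procedure
    AS
    BEGIN
        CREATE TABLE #temp (a int NOT NULL unique)   -- eligible for caching
        INSERT INTO #temp VALUES (1)
    END
    GO
    EXEC dbo.usp_TempCacheDemo
    EXEC dbo.usp_TempCacheDemo   -- second execution can reuse the cached temp table
    GO
    select count(*) from sys.dm_os_memory_cache_entries where
    type='CACHESTORE_TEMPTABLES'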


    Best Regards, Uri Dimant SQL Server MVP http://dimantdatabasesolutions.blogspot.com/ http://sqlblog.com/blogs/uri_dimant/
    Wednesday, December 29, 2010 3:30 PM
    Answerer
  • @Tom, thanks for your response.

     

    Yes, recently used data from normal tables, temp tables & table variables is kept in the data cache for frequent use, similar to the concept of paging in an OS.

    My question is not about keeping data in memory for processing, that part is fine, but about the tables themselves being created in memory and stored there.

    Hence my query about the statement: "If memory is available, both table variables and temporary tables are created and processed while in memory (data cache)."

    Any reason for that?


    ~Manu
    http://sqlwithmanoj.wordpress.com
    Thursday, December 30, 2010 5:24 AM
  • This is a poorly worded explanation.

    It means if there is cache memory available, they are cached in the data cache, in addition to being written to tempdb.

    Thursday, December 30, 2010 5:39 PM
    Moderator
  • What that article is saying is that the data in table variables and the data in temp tables is treated the same.  It might all stay in memory and some or all of it might be written to disk in the tempdb.  It does not mean that you will not have entries made in the system tables for tempdb for those objects.  That will always happen for all table variables and all temp tables.

    Tom

    Thursday, December 30, 2010 5:39 PM
  • Is it valid to say that operations performed on temp tables are disk-bound? Say, for example, a temp table has 10 million rows and there is some processing going on against the records inside the table. In this case, per the above discussion, is it correct to say the operations are hitting disk and not memory?

    thanks,

    Monday, April 25, 2011 7:16 PM
  • Basically, no.  Although in one sense yes.  The data in both temp tables and table variables is originally in memory, and it will stay there unless SQL Server needs the memory for other uses, in which case the data will be written to tempdb.  Both table variables and temp tables are treated the same for this purpose.  The difference is that all changes (inserts, updates, deletes) to temp tables are also written to the tempdb log file, but changes to table variables are not written to the log file.  (A side effect of this is that if you do a rollback, changes to temp tables done in the transaction will be rolled back, but not changes to table variables.)
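
    A quick way to see that rollback side effect (just a sketch to illustrate the point):

    CREATE TABLE #t (i int)
    DECLARE @t table (i int)

    BEGIN TRANSACTION
        INSERT INTO #t VALUES (1)
        INSERT INTO @t VALUES (1)
    ROLLBACK

    SELECT (SELECT COUNT(*) FROM #t) AS temp_table_rows,      -- 0: insert was rolled back
           (SELECT COUNT(*) FROM @t) AS table_variable_rows   -- 1: insert survived the rollback

    DROP TABLE #t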

    Tom

    Monday, April 25, 2011 7:34 PM
  • Basically, no.  Although in one sense yes.  The data in both temp tables and table variables is originally in memory, and it will stay there unless SQL Server needs the memory for other uses, in which case the data will be written to tempdb.  Both table variables and temp tables are treated the same for this purpose.  The difference is that all changes (inserts, updates, deletes) to temp tables are also written to the tempdb log file, but changes to table variables are not written to the log file.  (A side effect of this is that if you do a rollback, changes to temp tables done in the transaction will be rolled back, but not changes to table variables.)

    Tom

    I have conducted an experiment to see if table variables use tempdb’s log.
    On a SQL Server instance of which I am the sole user, I first restarted SQL Server and checked the tempdb log, which at that time was 512 KB.
    Then I executed the following query against one of the databases on the instance:

    DECLARE @i int
    DECLARE @a table (i int)
    SET @i = 0
    WHILE @i < 100000
    BEGIN
        INSERT INTO @a VALUES(@i)
        SET @i = @i + 1
    END

    After which I checked the log file size again, and it was now 1536 KB.
    So this shows us that, although table variables are not rolled back, and although the log of tempdb is not used to redo anything anyway (since tempdb is recreated at startup), any write to a table variable is still logged to tempdb's log.
    And this seems unnecessary, especially given that SQL Server has to wait for writes to the log, so if SQL Server did not write table variable changes to the log there would probably be a performance gain.
    As such I would say that for an OLAP server one should rather have an extra spindle for the tempdb log than for the database log, since for read-heavy workloads the tempdb log performance is more of a concern.
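
    (For anyone wanting to repeat this, a sketch of one way to watch the tempdb log around the loop above: DBCC SQLPERF(LOGSPACE) reports log size and percentage used for every database, so note tempdb's row before and after the insert loop.)

    DBCC SQLPERF(LOGSPACE)   -- note tempdb's "Log Size (MB)" before the loop
    GO
    -- run the table variable insert loop above
    GO
    DBCC SQLPERF(LOGSPACE)   -- check again: the tempdb log has grown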


    Monday, November 14, 2011 8:23 PM
  • It looks like you're correct.  I also don't know why they do the writes for the table variables, but they seem to.  To check this, I ran a slightly different test than yours: loading the temp table/table variable with a set-based insert, giving the temp table/table variable a primary key and more/larger columns, doing the temp table insert in a transaction (so that I knew the temp table had to write to the log), and using sys.dm_io_virtual_file_stats to get the number of writes and number of bytes written.  The results varied, but they were much the same for the table variable and the temp table: sometimes one was faster or showed fewer writes, sometimes the other (even if your process is the only user process running, there are always system processes and they also use tempdb), but the overall typical result seemed to be the same.  The code I tested with is below.

    Use tempdb
    -- Create stats table
    Select Cast(Null As int) As Entry, *
    Into #FStats
    From sys.dm_io_virtual_file_stats(DB_ID('TempDB'), 2)
    Where 1=0
    go
    Checkpoint
    -- Save stats before table variable
    Insert #FStats
    Select 1, * 
    From sys.dm_io_virtual_file_stats(DB_ID('TempDB'), 2)
    go
    DECLARE @a table (i int identity Primary Key, j int, k varchar(25), m decimal(10,2), n varchar(1000))
    INSERT INTO @a (j, k, m, n) 
    Select colid, 'This value is ' + Cast(colid As varchar(11)), colid/100., Replicate('A', 1000)
    From master.sys.syscolumns Cross Join master.sys.tables
    go
    Checkpoint
    -- Save stats after table variable and before temp table
    Insert #FStats
    Select 2, * 
    From sys.dm_io_virtual_file_stats(DB_ID('TempDB'), 2)
    go
    CREATE TABLE #a  (i int identity Primary Key, j int, k varchar(25), m decimal(10,2), n varchar(1000))
    BEGIN TRANSACTION
    INSERT INTO #a (j, k, m, n)
    Select colid, 'This value is ' + Cast(colid As varchar(11)), colid/100., Replicate('A', 1000)
    From master.sys.syscolumns Cross Join master.sys.tables
    COMMIT
    Drop Table #a
    go
    -- Save stats after temp table
    Checkpoint
    Insert #FStats
    Select 3, * 
    From sys.dm_io_virtual_file_stats(DB_ID('TempDB'), 2)
    go
    Select Case When f1.Entry = 2 Then 'Table Variable' Else 'Temp Table' End As Run,
        f1.sample_ms - f2.sample_ms As ms, 
        f1.num_of_reads - f2.num_of_reads As num_of_reads, 
        f1.num_of_bytes_read - f2.num_of_bytes_read As num_of_bytes_read, 
        f1.io_stall_read_ms - f2.io_stall_read_ms As io_stall_read_ms, 
        f1.num_of_writes - f2.num_of_writes As num_of_writes, 
        f1.num_of_bytes_written - f2.num_of_bytes_written As num_of_bytes_written, 
        f1.io_stall_write_ms - f2.io_stall_write_ms As io_stall_write_ms, 
        f1.io_stall - f2.io_stall As io_stall 
    From #FStats f1
    Inner Join #FStats f2 On f1.Entry = f2.Entry + 1
    Order By Run;
    go
    Drop Table #FStats
    

    Tom
    Monday, November 14, 2011 10:26 PM