SELECT COUNT(*) takes over 4 minutes on a table?

  • Question

  • When I do a SELECT COUNT(*) FROM MyTable it takes 4 minutes 15 seconds.

    I also noticed that DELETE also takes a long time to clear down records.

    The table has no Primary Key or Index.

    There are around 100,000 records in the table.

    Will a Primary Key / Index improve the response time of my SELECT COUNT(*) and DELETE statements?


    CG

    Tuesday, September 2, 2014 11:47 AM

Answers

  • If you are using a heap (which is a table without a clustered index), and you delete rows without using a table lock hint, empty table spaces may not be released.

    When you submit a query like "SELECT COUNT(*) FROM MyTable", the entire table is scanned, including every empty page.

    You should check the space that the table actually consumes, using sp_spaceused <tablename>. If the number of pages is excessive compared to the number of rows, you should consider (temporarily) adding a clustered index. When you do, all avoidable free space will be removed.

    Now, to answer your question: if your table is very wide (requires many bytes per row), then creating a very narrow nonclustered index will definitely speed up the "SELECT COUNT(*)" query. At the same time, it will have a negative effect on the DELETE performance.

    Of course you can always test this. If the added index hurts performance, then you just delete it.
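
    The check and the temporary rebuild described above can be sketched as follows (the table name MyTable and column name Id are placeholders; any reasonably selective column will do for the clustered index):

    -- Compare pages reserved against the row count for the heap
    EXEC sp_spaceused 'MyTable';

    -- If reserved space is excessive, a clustered index rebuilds the table
    -- compactly; dropping it afterwards returns the table to a heap
    CREATE CLUSTERED INDEX CIX_MyTable_Temp ON dbo.MyTable (Id);
    DROP INDEX CIX_MyTable_Temp ON dbo.MyTable;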


    Gert-Jan

    Tuesday, September 2, 2014 7:11 PM
  • TRUNCATE TABLE Table1 is much, much faster than DELETE....

    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Tuesday, September 2, 2014 1:10 PM
    Answerer
  • If you want to get rid of all the rows from the tables you should use TRUNCATE TABLE

    TRUNCATE TABLE Table1
    GO

    TRUNCATE TABLE Table2
    GO

    TRUNCATE TABLE Table3
    GO


    Javier Villegas | @javier_vill | http://sql-javier-villegas.blogspot.com/

    Please click "Propose As Answer" if a post solves your problem or "Vote As Helpful" if a post has been useful to you

    Tuesday, September 2, 2014 1:47 PM

All replies

  • YES, you may try to introduce an index (an appropriate index; without knowing your table and its usage, it's hard to suggest one) and see the difference. If you have a primary key candidate, create a primary key.
    Tuesday, September 2, 2014 11:54 AM
  • How is the load on the server? Are there any locks on the table? Still, it should not take that long for a 100k-row table.
    • Edited by Soumen Barua Tuesday, September 2, 2014 12:01 PM
    Tuesday, September 2, 2014 11:57 AM
  • Will a Primary Key / Index improve the response time of my SELECT COUNT(*) and DELETE statements?

    Over 4 minutes seems excessive for a scan of 100,000 rows so I think there may be something else going on, such as excessive blocking.  Without any useful indexes, blocking will occur if you have concurrent deletes and select count queries.

    I assume there is a WHERE clause in the delete statement?  Ideally, create a clustered index on that column(s) to optimize the delete.  The SELECT COUNT(*) query will require a full scan but I suspect performance will be much better once the clustered index is added to optimize the delete.
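
    A sketch of that approach, assuming the deletes filter on a date column (the column name LoadDate here is hypothetical):

    CREATE CLUSTERED INDEX CIX_MyTable_LoadDate ON dbo.MyTable (LoadDate);

    DELETE FROM dbo.MyTable WHERE LoadDate < '20140901';

    With the clustered index in place, the delete can seek to the qualifying range instead of scanning the whole heap.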



    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

    Tuesday, September 2, 2014 12:21 PM
  • Try getting the row count from the catalog metadata instead:

    SELECT
        t.name,
        [RowCount] = SUM
        (
            CASE
                WHEN (p.index_id < 2) AND (a.type = 1) THEN p.rows
                ELSE 0
            END
        )
    FROM
        sys.tables t
        INNER JOIN sys.partitions p
            ON t.object_id = p.object_id
        INNER JOIN sys.allocation_units a
            ON p.partition_id = a.container_id
    GROUP BY
        t.name
    HAVING t.name = 'tblname'


    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/


    Tuesday, September 2, 2014 12:27 PM
    Answerer
  • I am using SSIS and my first task (a SQL Task) does the deletes, which are the same as below.

    DELETE FROM Table1
    GO

    DELETE FROM Table2
    GO

    DELETE FROM Table3
    GO

    Running these manually in SQL Server Management Studio also takes forever (after uploading 10,000 records to the tables; smaller amounts like 100 records cause no issues).


    CG

    Tuesday, September 2, 2014 12:59 PM
  • Truncate is faster. However, with only 100,000 records in play, it shouldn't be much faster.

    If it is, you may find you're suffering from a configuration problem.

    If your log files are stored on the same physical spindles as your data files, you may be subject to I/O blocking: SQL Server is trying to write both data and log files at the same time, to the same place, and is tripping itself up.

    A good drive configuration is as follows:

    C:\ Operating system and program files.
    L:\ Log files
    M:\ misc. files
    S:\ Data files
    T:\ TempDB and its logs (unless you can spare the overhead to have the tempDB logs stored on their own spindle).
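
    You can check where each database's files actually live with the sys.master_files catalog view:

    SELECT db.name, mf.type_desc, mf.physical_name
    FROM sys.master_files AS mf
    INNER JOIN sys.databases AS db
        ON db.database_id = mf.database_id
    ORDER BY db.name, mf.type_desc;

    If the ROWS and LOG files share the same drive letter, they likely share spindles as well.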


    Tuesday, September 2, 2014 3:31 PM