What is taking so long?

  • Question

  • Hello.

I have a query that takes 1.5 seconds to execute, but only 150 ms of CPU. The query is quite simple: just one WHERE clause against a clustered index.

    SQL Server Execution Times:
       CPU time = 156 ms,  elapsed time = 1595 ms.

    Code Snippet

    SELECT column1, column3, column4, ..., column10 FROM table WHERE column2 IN (37, 41, 43, 45, 49, 53, 55) ORDER BY column3 DESC

    Code Snippet

    |--Sort(TOP 1000, ORDER BY:([u].[LastActivityDate] DESC))
         |--Clustered Index Seek(OBJECT:([MP].[dbo].[__searchtest].[cix___searchtest_] AS [u]), SEEK:([u].[searchparamid]=37 OR [u].[searchparamid]=41 OR [u].[searchparamid]=43 OR [u].[searchparamid]=45 OR [u].[searchparamid]=49 OR [u].[searchparamid]=53 OR [u].[searchparamid]=55 OR [u].[searchparamid]=59) ORDERED FORWARD)

    I have tried to rewrite the query to an INNER JOIN instead.

    Code Snippet

    |--Sort(TOP 1000, ORDER BY:([u].[LastActivityDate] DESC))
         |--Nested Loops(Inner Join, OUTER REFERENCES:([spal].[number]))
              |--Index Seek(OBJECT:([MP].[dbo].[__search_parameters_lookup].[IX___search_parameters_lookup] AS [spal]), SEEK:([spal].[hash]=-1726604993) ORDERED FORWARD)
              |--Clustered Index Seek(OBJECT:([MP].[dbo].[__searchtest].[cix___searchtest_] AS [u]), SEEK:([u].[searchparamid]=[spal].[number]) ORDERED FORWARD)

but the query still takes 1.5 seconds.

According to the execution plan, it spends 59% of its time on the sort, 14% on the index seek of the __search_parameters_lookup table, and 24% on the clustered index seek of the __searchtest table.

How come it uses so little CPU but still takes 1.5 seconds? It seems to be reading from memory as well, so it shouldn't be an I/O problem?

The index I have on the table is a clustered index on (column2).
Any ideas on how I can improve this? I have tried with the DTA, and also with a nonclustered index on column3.

If I remove some columns from the SELECT list, the query executes a lot faster:

    SQL Server Execution Times:
       CPU time = 32 ms,  elapsed time = 32 ms.

Both the CPU and elapsed time go down and now appear more normal.

So there seems to be a problem caused by data transfer.
I tried normalizing the table, and when I do, the query executes with 400 ms of CPU and 400 ms total. The result set is still exactly the same, so why does it spend only 400 ms fetching the data when the tables are normalized, but 1500 ms when they are denormalized?

    Any ideas?

I am running Microsoft SQL Server 2000 - 8.00.2039
    Monday, January 28, 2008 10:12 AM

All replies

  • Any input?
    Wednesday, February 13, 2008 10:18 PM
  • Think of it this way.


When your SELECT list is limited to columns contained in the index, the results can be fetched directly from the index reads. When you expand the SELECT list to include non-indexed columns, the rows must also be fetched from the table and combined with the columns read from the index.


    Wednesday, February 13, 2008 11:05 PM
As Arnie said, when you select a column that is not part of an index, a lookup operation occurs, and that can add overhead to the query. If you are desperate for performance, try adding all the columns that you need as 'included columns' to your index. This will avoid the lookup. Refer to this page to learn how to create indexes with 'included columns': http://www.mssqltips.com/tip.asp?tip=1078

Also try running your query with SET STATISTICS IO ON to see how many read/I/O operations take place while running your query. Is this table highly transactional? If yes, try controlling the locks or changing the isolation level to SNAPSHOT (if you are on SQL Server 2005).
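Jacob's 'included columns' suggestion might look like the following on SQL Server 2005 (this is a sketch: the index name is made up, the column names mirror the placeholders used in this thread, and INCLUDE is not available on SQL Server 2000):

```sql
-- Hypothetical names; requires SQL Server 2005 or later.
-- Key on the filter column plus the sort column; INCLUDE the remaining
-- SELECT-list columns so the query is covered and no lookup is needed.
CREATE NONCLUSTERED INDEX IX___searchtest_covering
ON dbo.__searchtest (column2, column3 DESC)
INCLUDE (column1, column4 /* , ... through column10 */);

-- Compare reads and timings before and after:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT column1, column3, column4 /* , ... */
FROM dbo.__searchtest
WHERE column2 IN (37, 41, 43, 45, 49, 53, 55)
ORDER BY column3 DESC;
```

With column2 as the leading key and column3 as a descending second key, the seek can return rows already in ORDER BY order for each column2 value, and the INCLUDE list avoids the bookmark lookup.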

    Thursday, February 14, 2008 3:59 AM
  • The query can most likely be sped up by removing the ORDER BY.  As your clustered index is based on Column2, TempDB must be used to sort them into Column3 order.  You're likely seeing the 1.5 seconds from having to write the rows into TempDB to perform the sort.  Having unsorted rows isn't likely an option, but it should explain where most of your duration is going -- waiting on the disk.


By putting an index on Column3, you already have your rows ordered correctly, removing the need for TempDB to perform the sort.  However, you then need to scan every row in the index to find those that match your condition on Column2.  Once that's done, you will need to do a bookmark lookup to find the other columns that you need, unless you use a covering index as described by Arnie and Jacob.  The number of rows in the index determines whether this is a faster option than sorting.  From SQL Server's point of view, using the clustered index on Column2 is quicker, as it then doesn't have to do a bookmark lookup to get the other columns - it just has to perform the sort.
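On SQL Server 2000, which has no INCLUDE clause, a covering index has to carry every needed column in its key, subject to the 16-key-column and 900-byte limits. A sketch using the placeholder names from this thread (only viable if the SELECT list is narrow enough):

```sql
-- SQL Server 2000 style: every covered column must be a key column.
-- Hypothetical index name; limited to 16 key columns / 900 bytes,
-- so this only covers the query if the SELECT list fits.
CREATE NONCLUSTERED INDEX IX___searchtest_cover
ON dbo.__searchtest (column2, column3 DESC,
                     column1, column4 /* , ... */);
```

Leading with column2 lets the seek find the matches, and column3 DESC as the second key keeps each match group pre-sorted, which is exactly the trade-off described above.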

    Thursday, February 14, 2008 4:23 AM
  • The problem with a covering index is that I have to cover 25 columns. SQL-Server 2000 can only handle 16 as I understand it.

    I understand that SQL-Server 2005 has a feature to include columns called nonkey columns that works almost like covering indexes and supports up to 1023 columns.

    But until we upgrade, what other choices may I have?
    Thursday, February 14, 2008 8:26 AM
You could try creating two indexes of 13 columns each, and see if the engine can merge the data as it pulls from them.  Yes, I said 13 each, since you'd have the primary key column in both indexes to match on.  This may not do anything at all for you; it is something I thought about and would consider trying in my environment if I were having this kind of a problem.
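A sketch of that idea with hypothetical names (pk_column stands in for the table's primary key; whether the optimizer actually intersects the two indexes would have to be verified in the execution plan):

```sql
-- Split the covered columns into two halves; each index repeats the
-- filter column and the primary key so the engine can join them back up.
CREATE NONCLUSTERED INDEX IX___searchtest_half1
ON dbo.__searchtest (column2, pk_column,
                     column1, column3 /* , ... first half */);

CREATE NONCLUSTERED INDEX IX___searchtest_half2
ON dbo.__searchtest (column2, pk_column,
                     column6, column7 /* , ... second half */);
```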


    Thursday, February 14, 2008 1:30 PM
Another option is to move the clustered index to Column3, which would eliminate both the sort and the bookmark lookup.  However, you'd then be scanning the entire table to find the matches for Column2, which could be worse performance-wise, depending on the number of rows.  It would also have an impact on everything else this table is used for, and SQL Server will also need to add a uniqueifier column to the table if Column3 is not unique.


Lots of testing would be required, so this may not be the best option; it may well be easier to upgrade to 2005.
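Moving the clustered index as described might look like this (index names taken from the plans posted above; rebuilding a clustered index rewrites the whole table and blocks concurrent access on SQL Server 2000, so test this off-hours):

```sql
-- Drop the existing clustered index on column2...
DROP INDEX __searchtest.cix___searchtest_;

-- ...and recreate it on column3, so rows are stored in ORDER BY order.
-- If column3 is not unique, SQL Server adds a hidden uniqueifier.
CREATE CLUSTERED INDEX cix___searchtest_col3
ON dbo.__searchtest (column3 DESC);
```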

    Thursday, February 14, 2008 10:30 PM