SQL Server Performance comparison

    Question

  • Hello,

    I have run two statements, the original and a modified version, and collected I/O stats. Below are the details.

    Now, how can we judge which one performed better based on these values?

    Total records in this table: 2,265,148

    Original Run: 

    Table 'P_Ver'. Scan count 52, logical reads 620, physical reads 0, read-ahead reads 3, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Updated Run:

    Table 'P_Ver'. Scan count 9, logical reads 61061, physical reads 1002, read-ahead reads 60052, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
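
    For context, these figures are the per-table output of SET STATISTICS IO; a minimal sketch of how I captured them (the query below is only a placeholder, not the actual statement):

    SET STATISTICS IO ON;
    GO

    -- placeholder: run the original statement here, then repeat for the modified one
    SELECT COUNT(*) FROM dbo.P_Ver;
    GO

    SET STATISTICS IO OFF;
    GO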

    I'd appreciate any suggestions on this kind of analysis.

    Thank you



    • Edited by dvsrk Thursday, July 11, 2013 12:28 AM
    Thursday, July 11, 2013 12:26 AM

Answers

  • It is not entirely clear how you tested this (how many times you ran these queries, whether you cleared the cache between runs, etc.).

    Having said that, the first shows no physical reads (either physical reads or read-ahead reads), while the second one shows large numbers of physical reads. If the two queries were run under similar circumstances, then that single fact is proof that the first query is much better.

    But the most important performance indicator is missing, and that is the actual elapsed time. In the end, that is criterion number one: "the one that ran the fastest", as Dan noted so eloquently. A simple way to measure it is like this:

    -- clear the cache if you want to test with a cold cache
    
    DECLARE @start datetime;
    SET @start = CURRENT_TIMESTAMP;

    -- execute the test query here

    SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP) AS elapsed_time_in_milliseconds;
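
    If you prefer to let SQL Server report the timings itself, SET STATISTICS TIME prints CPU time and elapsed time per statement in the Messages output; a sketch (the query is just a placeholder):

    SET STATISTICS TIME ON;
    GO

    -- placeholder: execute the test query here
    -- SELECT ... ;
    GO

    SET STATISTICS TIME OFF;
    GO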
    


    Gert-Jan

    Thursday, July 11, 2013 6:58 PM

All replies

  • Hello,

    A cold cache wasn't used, or you wouldn't have all logical reads; but with this, all you can say is that the first run was much more I/O efficient than the second, updated run. This is evident from the number of reads that occurred...

    I'm not sure what else you're wanting to get from this?

    -Sean


    Sean Gallardy | Blog | Twitter

    Thursday, July 11, 2013 1:02 AM
  • Hello,


    The original one seems better. 

    Also, regarding the "cold cache" Sean mentioned - DO NOT DO THIS IN PRODUCTION.
    In a test environment, the following will flush the buffer cache so you get accurate information:

    CHECKPOINT
    GO
    DBCC DROPCLEANBUFFERS
    GO

    If you have not already read it, this article gives more information about the different values reported:

    http://msdn.microsoft.com/en-us/library/ms184361.aspx

    If you clear the buffer cache and query #2 still shows larger numbers, then the change in #2 is using a lot more I/O (physical or logical). You should go with query #1, which had fewer reads/read-aheads.
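
    Putting the pieces together, a rough cold-cache test on a non-production server could look like this (the query is a placeholder for your actual statement):

    -- TEST SERVER ONLY: write dirty pages to disk, then empty the buffer cache
    CHECKPOINT;
    GO
    DBCC DROPCLEANBUFFERS;
    GO

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    GO

    -- placeholder: run query #1 here, then repeat the whole script for query #2
    -- SELECT ... FROM dbo.P_Ver WHERE ... ;
    GO

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    GO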

    -Norm

    If this was helpful, please vote for it; if it answers your question, please mark it as answered



    Thursday, July 11, 2013 2:04 AM
    Now, how can we judge which one performed better based on these values?

    The one that ran the fastest :-)

    The number of logical reads is an important performance factor, and the first query is clearly much better in this regard. Running each query against a cold cache will likely magnify this I/O disparity and the difference in wall-clock times.


    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

    Thursday, July 11, 2013 3:51 AM
  • Thank you so much for your responses. 

    If we consider the scan count from the above details, #1 has 52 whereas #2 has 9. So, if we take the table's record count of 2.2M and the scan count into consideration, which would be the better option?

    My confusion is: for a table with 2.2M records, is a scan count of 9 better or a scan count of 52? Please clarify.

    Thanks

    Thursday, July 11, 2013 5:24 PM
  • Hi,

    Unless (for some reason) you want your database to respond slowly to users' requests, lower scan and read counts are better.


    Sebastian Sajaroff Senior DBA Pharmacies Jean Coutu

    Thursday, July 11, 2013 5:30 PM