Performance testing

  • Question

  • I compressed a database, and now I want to test the performance increase. I can't run a trace on production. Is there somewhere I can find some queries that I can run with SET STATISTICS IO ON and measure the difference that way?

    Alan

    Saturday, June 8, 2013 3:34 PM
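For readers landing on this thread: the kind of by-hand measurement the question describes looks roughly like the following. This is only a sketch; `dbo.YourTable` is a placeholder for one of your own tables, and `sp_estimate_data_compression_savings` lets you preview the space savings before committing to a rebuild.

```sql
-- Estimate how well a table would compress before rebuilding it:
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'YourTable',   -- placeholder table name
    @index_id         = NULL,          -- NULL = all indexes on the table
    @partition_number = NULL,          -- NULL = all partitions
    @data_compression = 'PAGE';        -- or 'ROW' / 'NONE'

-- Measure a single query by hand, before and after compression:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.YourTable;    -- run the same query in both states

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

The logical reads reported in the Messages tab are the numbers to compare between the two runs.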

Answers

  • In order to validate that compression either improves performance for your workload or does not negatively impact it, you need to record the actual workload from production. This is the most comprehensive method.

    I'd be interested to know what obstacles are preventing this for you.

    An alternative to using SQL Server Profiler is the SQL Server Distributed Replay Utility.

    Alternatively, perhaps you are sufficiently familiar with your workload to know which are your most important queries and stored procedures. You could then focus your testing on validating the performance of these against the compressed version of the database.


    John Sansom | @SQLBrit


    • Edited by John Sansom Saturday, June 8, 2013 4:22 PM
    • Proposed as answer by Dean Savović Monday, June 10, 2013 8:04 AM
    • Marked as answer by Fanny Liu Friday, June 14, 2013 9:09 AM
    Saturday, June 8, 2013 3:40 PM
  • Running a trace on production is prohibited. I am a new DBA and not yet familiar with the system. Is there any way to get the most-executed SQL and stored procedures?

    One method is a query of sys.dm_exec_query_stats, which contains aggregated stats.

    SELECT
    	st.text
    	,SUBSTRING(st.text, (qs.statement_start_offset/2)+1, 
    		((CASE qs.statement_end_offset
    			WHEN -1 THEN DATALENGTH(st.text)
    			ELSE qs.statement_end_offset
    		END - qs.statement_start_offset)/2) + 1) AS statement_text
    	,qp.query_plan
    	,qs.*
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    ORDER BY 
    	qs.execution_count DESC;
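If the results need to be narrowed to one database, the `dbid` column returned by `sys.dm_exec_sql_text` can be used as a filter. A sketch, with `'YourDatabase'` as a placeholder name; note that `dbid` is NULL for ad-hoc and prepared statements, so those rows will drop out of the filtered result:

```sql
SELECT
    st.text,
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_logical_writes
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.dbid = DB_ID('YourDatabase')   -- NULL dbid (ad-hoc SQL) is excluded here
ORDER BY qs.execution_count DESC;
```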



    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

    • Proposed as answer by Dean Savović Monday, June 10, 2013 8:05 AM
    • Marked as answer by Fanny Liu Friday, June 14, 2013 9:09 AM
    Saturday, June 8, 2013 4:10 PM
  • Here is a query from Glenn Berry that will give you the top I/O statements; you can then test them with and without compression:

    SELECT
    TOP(50) OBJECT_NAME(qt.objectid) AS [SP Name]
    , (qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count AS [Avg IO]
    , SUBSTRING(qt.[text], (qs.statement_start_offset/2) + 1   -- SUBSTRING is 1-based
    ,	((CASE 
    		WHEN qs.statement_end_offset = -1 
    	 THEN LEN(CONVERT(nvarchar(max), qt.[text])) * 2 
    		ELSE qs.statement_end_offset 
    	 END - qs.statement_start_offset)/2) + 1) AS [Query Text]
    FROM sys.dm_exec_query_stats AS qs WITH (NOLOCK)
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    WHERE qt.[dbid] = DB_ID()
    ORDER BY [Avg IO] DESC OPTION (RECOMPILE);

    SET STATISTICS IO ON should give you lower values with compression than without, if your data compresses well. This does not necessarily mean that your queries will be faster, because the data has to be decompressed before it is sent to the client, and decompression mostly consumes CPU.
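When comparing the two runs, `sys.partitions` can confirm which compression setting a table actually ended up with. A sketch; `dbo.YourTable` is a placeholder table name:

```sql
-- data_compression_desc is NONE, ROW, or PAGE, reported per partition
SELECT
    OBJECT_NAME(p.object_id) AS table_name,
    p.index_id,
    p.partition_number,
    p.data_compression_desc
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.YourTable');
```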


    Regards, Dean Savović

    • Marked as answer by anaylor01 Friday, June 14, 2013 2:56 PM
    Monday, June 10, 2013 8:11 AM

All replies

  • Running a trace on production is prohibited. I am a new DBA and not yet familiar with the system. Is there any way to get the most-executed SQL and stored procedures?

    Alan

    Saturday, June 8, 2013 3:49 PM
  • I suspect that running a "blanket" trace on production is prohibited, but if you were to work with your business teams to create a filtered server-side trace, you could quite likely come to an arrangement.

    There are many different ways to review the performance of queries on a SQL Server instance. For example:

    Using SQL Server DMVs


    John Sansom | @SQLBrit

    Saturday, June 8, 2013 3:55 PM
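The filtered server-side trace John mentions can be sketched with the `sp_trace_*` procedures. This is only an outline, not production-ready; the file path `N'X:\traces\compression_test'` and database name `N'YourDatabase'` are placeholders:

```sql
DECLARE @traceid INT, @maxsize BIGINT = 256;  -- max trace file size in MB

-- Create the trace (0 = no special options); the path is a placeholder
EXEC sp_trace_create @traceid OUTPUT, 0, N'X:\traces\compression_test', @maxsize;

-- Event 12 = SQL:BatchCompleted; columns 1/13/16/17 = TextData/Duration/Reads/Writes
EXEC sp_trace_setevent @traceid, 12, 1,  1;
EXEC sp_trace_setevent @traceid, 12, 13, 1;
EXEC sp_trace_setevent @traceid, 12, 16, 1;
EXEC sp_trace_setevent @traceid, 12, 17, 1;

-- Column 35 = DatabaseName; logical operator 0 = AND, comparison 0 = equal.
-- This is the filter that keeps the trace from capturing the whole server.
EXEC sp_trace_setfilter @traceid, 35, 0, 0, N'YourDatabase';

EXEC sp_trace_setstatus @traceid, 1;  -- 1 = start the trace
```

The resulting .trc file can be loaded with `fn_trace_gettable` and compared before and after compression.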
  • >Is there any way to get the most-executed SQL and SPs?

    The SSMS Built-In reports "Top Queries by Total CPU Time" and "Top Queries by Total IO" (right-click on your instance in SSMS, select Reports.)

    Or go straight to the DMVs, particularly sys.dm_exec_query_stats.

    David


    David http://blogs.msdn.com/b/dbrowne/


    Saturday, June 8, 2013 4:17 PM
  • Do a performance test with the help of a performance testing team. We benchmark a system before the modifications and re-run the test after the modification. We also collect Perfmon counters during the test and compare the results. From the test you will also get the response time and the number of failed/passed transactions. You can choose LR (a little expensive) or VSTS for the same.

    Please use Marked as Answer if my post solved your problem and use Vote As Helpful if a post was useful.

    Saturday, June 8, 2013 4:31 PM
  • I tried the DMVs, but I didn't see anything for my database.

    What is the "PERFORMANCE TESTING TEAM"?


    Alan


    • Edited by anaylor01 Saturday, June 8, 2013 4:41 PM
    Saturday, June 8, 2013 4:36 PM
  • I do not know why my previous post was marked as abusive. OK, fine. If you had given me the reason, I would have been happy to see it...

    I will explain the strategy a bit more; if it helps, you may consider it.

    In production, you may not be able to evaluate your actual response time; that is the reality. There will be parallel requests coming from different users, so it is a kind of mixed-mode strategy. You also said you cannot run Profiler. What we do is take a system at 1/4 the scale of production as a performance testing environment, and run the performance test there before and after the modification. As I said, this gives a fair idea of the performance impact of the change we made.


    Please use Marked as Answer if my post solved your problem and use Vote As Helpful if a post was useful.

    Sunday, June 9, 2013 12:25 AM
  • Basically, what I was hoping for is a query that would return reads, writes, and duration, which I could run before compression and then again after compression.

    Alan

    Monday, March 24, 2014 9:30 PM
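A query along the lines Alan describes can be put together from sys.dm_exec_query_stats: capture its output before compression, rebuild with compression, then run it again and compare. A sketch; note that this DMV reports times in microseconds, and its counters reset when the plan cache is cleared or the instance restarts:

```sql
SELECT TOP (50)
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_logical_writes,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,  -- DMV reports microseconds
    qs.total_worker_time / 1000  AS total_cpu_ms
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

Saving each run's result into a table (or a spreadsheet) keyed on statement_text makes the before/after comparison straightforward.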