Sudden drop in Page Life Expectancy

    Question

  •  I have sudden drops in PLE:

    PLE        DateTime
    33         2009-02-25 08:37:00.140
    83         2009-02-25 08:36:00.797
    25         2009-02-25 08:35:00.840
    15962      2009-02-25 08:34:02.430
    15901      2009-02-25 08:33:00.180
    15842      2009-02-25 08:32:00.693
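
    For reference, here's roughly how samples like these can be collected (a minimal sketch using the performance counter DMV; on a named instance the [object_name] prefix is MSSQL$InstanceName: rather than SQLServer:):

    -- Sketch: read the current Page Life Expectancy (in seconds) from the
    -- performance counter DMV. counter_name is a fixed-width column, so LIKE
    -- sidesteps trailing-padding mismatches.
    SELECT GETDATE()  AS sample_time,
           cntr_value AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE [object_name] LIKE N'%Buffer Manager%'
      AND counter_name LIKE N'Page life expectancy%';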


    How can I figure out what is responsible for this drop? I have 6 GB RAM and I'm using SQL Server 2005. I read a couple of articles and most of them point to memory pressure... So I tried to figure out which scripts are using a lot of memory. I tried Michael Zilberstein's script (http://sqlblog.com/blogs/michael_zilberstein/archive/2008/09/09/Query-performance-troubleshooting-in-SQL-Server-2008_3A00_-query_5F00_hash-and-query_5F00_plan_5F00_hash.aspx). I think I have a couple of scripts that generate a lot of I/O:

    Top 5 scripts in avg. IO:
    6147073
    5196743
    5189347
    5177903
    3864497

    What does this IO mean? Page reads? How does this affect my memory?

    Any help is highly appreciated.

    Thank you!

    Wednesday, February 25, 2009 4:57 PM

Answers

  • In my opinion, you don't need to run Profiler; based on what you have posted, the information you need is already in the DMVs.  There are plenty of ways to go about looking at this, and unless you aren't finding what you need in the DMVs, I wouldn't jump into Profiler.  If a query is causing your PLE to drop significantly, that would show up in the DMVs.  I wouldn't rely on the IO calculations from the blog you referenced, because they mix in logical_reads and physical_writes.  I would focus on what is pulling high physical_reads, which is where data is being read from disk up into the BPool (buffer pool).

    SELECT TOP 10   
       qs.execution_count,   
       AvgPhysicalReads  = isnull( qs.total_physical_reads/ qs.execution_count, 0 ),  
       MinPhysicalReads  = qs.min_physical_reads,  
       MaxPhysicalReads  = qs.max_physical_reads,  
       AvgPhysicalReads_kbsize  = isnull( qs.total_physical_reads/ qs.execution_count, 0 ) *8,  
       MinPhysicalReads_kbsize  = qs.min_physical_reads*8,  
       MaxPhysicalReads_kbsize  = qs.max_physical_reads*8,  
       CreationDateTime = qs.creation_time,  
       -- statement offsets are byte offsets into the Unicode batch text: divide
       -- by 2 for characters and add 1, because SUBSTRING is 1-based; an end
       -- offset of -1 means "to the end of the batch"
       SUBSTRING(qt.[text], qs.statement_start_offset/2 + 1, (
           CASE
               WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX), qt.[text])) * 2
               ELSE qs.statement_end_offset
           END - qs.statement_start_offset)/2 + 1
       ) AS query_text,
       qt.[dbid],   
       qt.objectid,   
       tp.query_plan,  
       tp.query_plan.exist('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/showplan";  
    /ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/MissingIndexes') missing_index_info  
    FROM    
       sys.dm_exec_query_stats qs   
       CROSS APPLY sys.dm_exec_sql_text (qs.[sql_handle]) AS qt   
       OUTER APPLY sys.dm_exec_query_plan(qs.plan_handle) tp   
    ORDER BY AvgPhysicalReads DESC  


    Look at those queries, and the missing_index_info included in the XML.  You want to target what is doing high physical reads into memory, not what is already working out of memory (logical reads).
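
    If it helps, the missing index details can be shredded straight out of the plan XML. A rough sketch, using element names from the published showplan schema:

    -- Sketch: one row per MissingIndexGroup in each cached plan, with the
    -- target table and the optimizer's estimated impact percentage.
    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT
       mig.node.value('(MissingIndex/@Database)[1]', 'nvarchar(128)') AS database_name,
       mig.node.value('(MissingIndex/@Table)[1]', 'nvarchar(128)')    AS table_name,
       mig.node.value('@Impact', 'float')                             AS estimated_impact,
       tp.query_plan
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) tp
    CROSS APPLY tp.query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/MissingIndexes/MissingIndexGroup') AS mig(node)
    ORDER BY estimated_impact DESC;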

    One other thing to look for is working set trims caused by external memory pressure.  Check your SQL Server error log for messages that say the working set is being trimmed for the SQL Server instance.
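
    On 2005 SP2 and later, the resource monitor ring buffer records these memory notifications as well; a quick sketch of how one might read it:

    -- Sketch: recent memory notifications from the resource monitor ring buffer.
    -- RESOURCE_MEMPHYSICAL_LOW entries near the time of a PLE drop point at
    -- external (OS-level) memory pressure; @time is ms since server start.
    SELECT
       x.record.value('(//Record/@time)[1]', 'bigint') AS record_time_ms,
       x.record.value('(//Record/ResourceMonitor/Notification)[1]', 'varchar(100)') AS notification
    FROM (
       SELECT CONVERT(xml, record) AS record
       FROM sys.dm_os_ring_buffers
       WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
    ) AS x
    ORDER BY record_time_ms DESC;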
    Jonathan Kehayias
    http://sqlblog.com/blogs/jonathan_kehayias/
    http://www.sqlclr.net/
    Please click the Mark as Answer button if a post solves your problem!
    Thursday, February 26, 2009 6:44 PM
    Moderator

All replies

  • Maybe there are just some missing indexes, which cause scans, which in turn cause page life expectancy to drop. I would run a server-side trace that records all queries with more than 1,000 reads, then simply check their execution plans...
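
    Roughly like this, for example: a minimal sketch of a file-based server-side trace with a reads filter (the output path is a placeholder; adjust to your environment):

    -- Sketch: server-side trace capturing SQL:BatchCompleted (event 12) for
    -- batches with at least 1000 reads. N'C:\Traces\HighReads' is a placeholder
    -- path (no extension; .trc is appended automatically).
    DECLARE @TraceID int, @maxfilesize bigint, @reads bigint;
    SET @maxfilesize = 100;   -- MB per rollover file
    SET @reads = 1000;

    EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\HighReads', @maxfilesize, NULL;

    -- Columns: 1 = TextData, 13 = Duration, 14 = StartTime, 16 = Reads
    EXEC sp_trace_setevent @TraceID, 12, 1,  1;
    EXEC sp_trace_setevent @TraceID, 12, 13, 1;
    EXEC sp_trace_setevent @TraceID, 12, 14, 1;
    EXEC sp_trace_setevent @TraceID, 12, 16, 1;

    -- Filter on Reads (column 16): logical operator 0 = AND, comparison 4 = >=.
    -- The filter value must match the column's type, hence the bigint variable.
    EXEC sp_trace_setfilter @TraceID, 16, 0, 4, @reads;

    EXEC sp_trace_setstatus @TraceID, 1;   -- 1 = start (0 = stop, 2 = close)
    SELECT @TraceID AS TraceID;            -- needed later to stop/close the trace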
    Bodo Michael Danitz - MCITP Database Administrator - free consultant - performance guru - www.sql-server.de
    Wednesday, February 25, 2009 5:05 PM
  • No need to do a server-side trace for this.  Just add a WHERE clause to your query that checks the plan for the existence of missing index information:

    WHERE  tp.query_plan.exist('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/showplan";
    /ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/MissingIndexes') = 1  
     
     
    Then click on the execution plan XML and you can see the missing index information inside it.  This isn't necessarily easy to read, but fortunately Microsoft made great strides in SSMS 2008 that make this a breeze.  If you don't have 2008 yet, you can download the free Management Studio Express tools for SQL Server 2008, and then when you open an execution plan, the missing index information appears in green text; you can right-click on it to get scripts to create those missing indexes.

    SQL Server 2008 Management Studio Express
    Jonathan Kehayias
    http://sqlblog.com/blogs/jonathan_kehayias/
    http://www.sqlclr.net/
    Please click the Mark as Answer button if a post solves your problem!
    Wednesday, February 25, 2009 5:28 PM
    Moderator
  •  Thanks guys.

    >>I would run a server trace that records all queries with more than 1000 reads.
    I have more than 1500 queries with more than 1000 reads...

    The problem is my users can compose their own requests (= different queries).

    >>Just add a where clause to your query that checks the plan for the existence of missing index information:
    The problem is similar... Every query suggests another index!

    I always have a bunch of indexes which are hit quite often... I think I need a "long term" analysis. I already tried with DTA but I didn't get the results I expected.

    Is this just about indexes or could I use anything else? Stats?
    Thursday, February 26, 2009 9:56 AM
  • >>The problem is my users can compose their own request (= different queries).

    My experience is that users settle into certain routines, so a handful of indexes should match the majority of queries. However, I don't believe you can find these indexes by using DTA. Also, a single index can match a whole bunch of queries, and creating each and every index that DTA suggests can easily lead to many redundant indexes, which in turn slow down transactions. So one should concentrate on those queries that make up the highest server load.
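
    One rough way to spot such redundant indexes afterwards: a sketch against sys.dm_db_index_usage_stats (run it in the user database; the counters reset at every restart):

    -- Sketch: reads vs. writes per index since the last restart. Indexes with
    -- many user_updates but few or no seeks/scans/lookups are drop candidates.
    SELECT
       OBJECT_NAME(us.[object_id])                     AS table_name,
       i.name                                          AS index_name,
       us.user_seeks + us.user_scans + us.user_lookups AS reads,
       us.user_updates                                 AS writes
    FROM sys.dm_db_index_usage_stats us
    JOIN sys.indexes i
       ON i.[object_id] = us.[object_id]
      AND i.index_id = us.index_id
    WHERE us.database_id = DB_ID()
    ORDER BY writes DESC, reads ASC;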

    Can you correlate the trace with PerfMon and the drop in page life expectancy? Maybe there are some very heavy queries (>>10,000 reads), or you'll find an increase in traffic. You'd then have to find out which queries make up the highest server load in terms of reads. Look at their execution plans: do you find scans?


    Bodo Michael Danitz - MCITP Database Administrator - free consultant - performance guru - www.sql-server.de
    Thursday, February 26, 2009 10:56 AM
  • >> creating each and every index that DTA suggests can easily lead to many redundant indexes which in turn slows down transactions
    That's exactly what I've experienced.

    >>Can you correlate the trace with perfmon and the drop of page life?
    How would you do that? By running Profiler for a while? I'm afraid it would cost too much performance itself...

    >>Maybe there's some very heavy queries (>>10,000 reads)
    Yes, there are a couple of such queries. As I wrote in the opening post, I've got the following Top 5:

    Top 5 scripts in avg. IO:
    6147073
    5196743
    5189347
    5177903
    3864497

    So you suggest investigating these heavy scripts, don't you?
    Thursday, February 26, 2009 5:24 PM