SQL Server Windows NT 64 bit - very high memory usage - Have to restart server every 18 hours - please help

  • Question

  • Hi there,

    On our newly configured SQL Server (Windows NT 64-bit), very high memory usage shows up.
    We have to reboot the server to get it back to normal (around 10% RAM). RAM usage grows continuously, reaching about 95% within roughly 18 hours, at which point our website slows down by a great margin. We reboot the server to get it back to normal, and the cycle repeats.

    During one such outage I ran "SELECT * FROM sys.dm_exec_requests" and the result is here.

    If I can provide any more information please let me know.

    Thanks so much for your help.

    Tuesday, December 1, 2015 7:28 AM

Answers

  • The issue was resolved by setting the SIMPLE recovery model for all DBs.

    Thanks so much all.

    • Marked as answer by ran009 Thursday, December 3, 2015 1:07 AM
    Thursday, December 3, 2015 1:07 AM

All replies

  • SQL Server using RAM is not an issue; this is normal. SQL Server's memory manager is designed to grab as much memory as possible and hold it unless the OS raises a low-memory notification, at which point SQL Server releases it.

    Have you set the max server memory limit?

    Set it to an appropriate value.
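
    For example, a minimal sketch of capping it with sp_configure (the 28672 MB value is purely illustrative; size it for your own server and leave headroom for the OS):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Value is in MB; 28672 MB = 28 GB
    EXEC sp_configure 'max server memory (MB)', 28672;
    RECONFIGURE;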


    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

    My Wiki Articles

    MVP


    Tuesday, December 1, 2015 7:43 AM
    Moderator
  • Make sure you set a SQL Server max memory limit and allow the operating system and background processes plenty of breathing room (approximately 2-2.5 GB minimum, but you need to keep an eye on this to find the right level).

    You should also monitor other processes' memory consumption to make sure that nothing else is consuming large amounts of memory. The reason SQL Server should be on a dedicated server is that it will try to use as much memory as required for the buffer cache, which grows steadily over time.
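
    If you want to see where SQL Server's own memory is going, a sketch against the memory-clerk DMV can help (the pages_kb column assumes SQL Server 2012 or later; older builds split it into single_pages_kb and multi_pages_kb):

    -- Top memory clerks; MEMORYCLERK_SQLBUFFERPOOL dominating is normal
    SELECT TOP 10 type, SUM(pages_kb) / 1024 AS memory_mb
    FROM sys.dm_os_memory_clerks
    GROUP BY type
    ORDER BY memory_mb DESC;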


    Regards,
    Mark Broadbent.
    Microsoft Certified Master
    Contact me through twitter | blog | sqlcloud


    Please click "Propose as answer" if a post solves your problem
    or/and vote the post up if it's been helpful.

    Tuesday, December 1, 2015 11:53 AM
  • That is completely normal and expected behavior. By default SQL Server uses all available RAM and does not release it unless Windows issues a memory trim request.

    Please see:

    https://msdn.microsoft.com/en-us/library/ms178067(v=sql.120).aspx

    Tuesday, December 1, 2015 2:09 PM
    Moderator
  • Keep a trace on memory usage with the following queries.
    /**********************************************************  
    *   top procedures memory consumption total  
    *   (this will show more operational procedures)  
    ***********************************************************/  
    SELECT TOP 100 *  
    FROM 
    (  
        SELECT 
             DatabaseName       = DB_NAME(qt.dbid)  
            ,ObjectName         = OBJECT_SCHEMA_NAME(qt.objectid,dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid)  
            ,DiskReads          = SUM(qs.total_physical_reads)   -- The worst reads, disk reads  
            ,MemoryReads        = SUM(qs.total_logical_reads)    --Logical Reads are memory reads  
            ,Total_IO_Reads     = SUM(qs.total_physical_reads + qs.total_logical_reads)  
            ,Executions         = SUM(qs.execution_count)  
            ,IO_Per_Execution   = SUM(qs.total_physical_reads + qs.total_logical_reads) / SUM(qs.execution_count) -- average IO per execution  
            ,CPUTime            = SUM(qs.total_worker_time)  
            ,DiskWaitAndCPUTime = SUM(qs.total_elapsed_time)  
            ,MemoryWrites       = SUM(qs.max_logical_writes)  
            ,DateLastExecuted   = MAX(qs.last_execution_time)  
       
        FROM sys.dm_exec_query_stats AS qs  
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt  
        GROUP BY DB_NAME(qt.dbid), OBJECT_SCHEMA_NAME(qt.objectid,dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid)  
    ) T  
    ORDER BY Total_IO_Reads DESC 
       
    /**********************************************************  
    *   top adhoc queries memory consumption total  
    ***********************************************************/  
    SELECT TOP 100 *  
    FROM 
    (  
        SELECT 
             DatabaseName       = DB_NAME(qt.dbid)  
            ,QueryText          = qt.text  
            ,DiskReads          = SUM(qs.total_physical_reads)   -- The worst reads, disk reads  
            ,MemoryReads        = SUM(qs.total_logical_reads)    --Logical Reads are memory reads  
            ,Total_IO_Reads     = SUM(qs.total_physical_reads + qs.total_logical_reads)  
            ,Executions         = SUM(qs.execution_count)  
            ,IO_Per_Execution   = SUM(qs.total_physical_reads + qs.total_logical_reads) / SUM(qs.execution_count) -- average IO per execution  
            ,CPUTime            = SUM(qs.total_worker_time)  
            ,DiskWaitAndCPUTime = SUM(qs.total_elapsed_time)  
            ,MemoryWrites       = SUM(qs.max_logical_writes)  
            ,DateLastExecuted   = MAX(qs.last_execution_time)  
       
        FROM sys.dm_exec_query_stats AS qs  
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt  
        WHERE OBJECT_SCHEMA_NAME(qt.objectid,dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid) IS NULL 
        GROUP BY DB_NAME(qt.dbid), qt.text, OBJECT_SCHEMA_NAME(qt.objectid,dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid)  
    ) T  
    ORDER BY Total_IO_Reads DESC 
       
    /**********************************************************  
    *   top adhoc queries memory consumption per execution  
    ***********************************************************/  
    SELECT TOP 100 *  
    FROM 
    (  
        SELECT 
             DatabaseName       = DB_NAME(qt.dbid)  
            ,QueryText          = qt.text  
            ,DiskReads          = SUM(qs.total_physical_reads)   -- The worst reads, disk reads  
            ,MemoryReads        = SUM(qs.total_logical_reads)    --Logical Reads are memory reads  
            ,Total_IO_Reads     = SUM(qs.total_physical_reads + qs.total_logical_reads)  
            ,Executions         = SUM(qs.execution_count)  
            ,IO_Per_Execution   = SUM(qs.total_physical_reads + qs.total_logical_reads) / SUM(qs.execution_count) -- average IO per execution  
            ,CPUTime            = SUM(qs.total_worker_time)  
            ,DiskWaitAndCPUTime = SUM(qs.total_elapsed_time)  
            ,MemoryWrites       = SUM(qs.max_logical_writes)  
            ,DateLastExecuted   = MAX(qs.last_execution_time)  
       
        FROM sys.dm_exec_query_stats AS qs  
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt  
        WHERE OBJECT_SCHEMA_NAME(qt.objectid,dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid) IS NULL 
        GROUP BY DB_NAME(qt.dbid), qt.text, OBJECT_SCHEMA_NAME(qt.objectid,dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid)  
    ) T  
    ORDER BY IO_Per_Execution DESC 
    Once you identify the query, look at its execution plan and then optimise the query as necessary by creating the missing indexes etc.
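    If it is a missing-index problem, a sketch against the missing-index DMVs can point you in the right direction (treat these suggestions as hints and validate them against the actual execution plan before creating anything):

    -- Rough ranking of missing-index suggestions by potential impact
    SELECT TOP 10
        mid.statement AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.user_seeks,
        migs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
        ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
        ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.user_seeks * migs.avg_user_impact DESC;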
    Tuesday, December 1, 2015 2:55 PM
  • Where did you copy it from, and how is this related to the SQL Server memory explanation?

    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

    My Wiki Articles

    MVP

    Tuesday, December 1, 2015 3:08 PM
    Moderator
  • Yes, MAX is set to 28 GB and MIN is 5 GB.

    Any other clues please?

    Thanks.

    Tuesday, December 1, 2015 10:15 PM
  • Hi retracement,

    Yup, I already have this enabled.

    Any other thoughts?

    Thanks.

    Tuesday, December 1, 2015 10:17 PM
  • Hi Dinesh

    This is a SharePoint database so we cannot change anything in terms of query.

    Any other thoughts?

    Thanks a lot.

    Tuesday, December 1, 2015 10:20 PM
  • Yes, MAX is set to 28 GB and MIN is 5 GB.

    Any other clues please?

    Thanks.

    OK, so how much is SQL Server actually utilizing? Can you post the clean output of the query below?

    select * from sys.dm_os_process_memory

    If it is utilizing somewhere around 28 GB, this is normal, because that is what you have allocated to it.
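
    A slightly more readable cut of the same DMV, as a sketch (values come back in KB, hence the conversion; if max server memory is 28 GB, seeing roughly that figure here is expected):

    -- How much physical memory the SQL Server process is actually using
    SELECT physical_memory_in_use_kb / 1048576.0 AS physical_memory_in_use_gb,
           memory_utilization_percentage,
           process_physical_memory_low,
           process_virtual_memory_low
    FROM sys.dm_os_process_memory;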


    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

    My Wiki Articles

    MVP

    Wednesday, December 2, 2015 7:21 AM
    Moderator
  • Set your Minimum Memory limit to be a little bit higher.

    Maybe SQL is having to surrender too much memory?

    Setting Max will stop SQL using too much, but setting Minimum will stop SQL from releasing too much.
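
    A sketch of raising the floor (the 8192 MB value below is purely illustrative, not a recommendation; pick a minimum that suits your workload):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Value is in MB; SQL Server will not shrink the buffer pool below this
    EXEC sp_configure 'min server memory (MB)', 8192;
    RECONFIGURE;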

    Incidentally, I don't think you need to reboot the server; just restart the services and the memory should be freed up. This is much faster than a reboot.

    Wednesday, December 2, 2015 12:39 PM
  • How much physical RAM does the server have?  The simplest solution is to set the "Max Server memory" smaller.

    Wednesday, December 2, 2015 3:30 PM
    Moderator
  • Incidentally, I dont think you need to reboot the server, just restart the services and the memory should be freed up. This is much faster than a reboot.

    You don't even have to restart the service, it adjusts dynamically!

    ... though if you've been using too much and are already paging, I'm not sure how smart it is about which pages to release, and it might be better to restart anyway.

    OP, check your paging rate. I assume it's near zero just after a restart, and when the paging rate goes up you see the performance problem. Setting max memory so that it keeps paging near zero is the goal.
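
    One quick way to see the memory state the OS is reporting to SQL Server, as a sketch:

    -- 'Available physical memory is low' here usually coincides with paging
    SELECT total_physical_memory_kb / 1024 AS total_physical_mb,
           available_physical_memory_kb / 1024 AS available_physical_mb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;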

    Josh

    Wednesday, December 2, 2015 7:40 PM
  • The issue was resolved by setting the SIMPLE recovery model for all DBs.

    Thanks so much all.

    • Marked as answer by ran009 Thursday, December 3, 2015 1:07 AM
    Thursday, December 3, 2015 1:07 AM
  • The issue was resolved by setting the SIMPLE recovery model for all DBs.

    Thanks so much all.

    This is a totally incorrect answer; memory CANNOT be affected by the recovery model in such a way that you change the recovery model and it drops down. Maybe you restarted SQL Server, so it may appear that memory was released. The recovery model controls logging and recovery.

    By setting the simple recovery model you lose point-in-time recovery, do you know that? Did you discuss it before changing the recovery model?


    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

    My Wiki Articles

    MVP

    Thursday, December 3, 2015 7:08 AM
    Moderator
  • As Shanky said, this is not a solution.   

    The recovery model does not affect RAM usage; it only affects how long the data in the log file is kept.

    Were you referring to disk usage or RAM usage?

    Thursday, December 3, 2015 4:02 PM
    Moderator
  • Guys, you can't tell the OP what the right and wrong answer is. He probably had slowness and thought it was memory related, but the bottleneck was probably an inadequate disk setup. By not retaining transaction logs he probably sped things up. SQL Server taking all the memory it can is by design and should not have been treated as a problem to begin with.
    Thursday, December 3, 2015 4:29 PM
  • Under very odd circumstances setting the recovery model to simple can release memory; however, this seems to happen more on neglected databases (for the disbelievers, you can Google it :D).

    Personally I would not have set the databases to simple, as now you have lost your point-in-time restore chain. I would have done a log backup and then a shrink, along the lines of the sketch below.
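
    Something like this, as a sketch (MyDb, the backup path, and the logical log file name are all illustrative; check yours with SELECT name FROM sys.database_files):

    -- Back up the log first so the inactive portion can be reused...
    BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';
    GO
    USE MyDb;
    GO
    -- ...then shrink the physical log file back to a sensible size (in MB)
    DBCC SHRINKFILE (MyDb_log, 1024);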

    How often are these databases backed up normally, and do you perform housekeeping functions on them (i.e. backups, defragmentation, etc.)?

    How large are the databases?

    Are there any operations on the databases such as replication and/or mirroring?

    Secondly, are the website and the database on the same server? If that is the case then it's rather bad design, as the SharePoint application itself can be quite inefficient and resource hungry.

    The guys are right though: SQL Server will always try to use the amount of max memory given to it, and if that means the server gets full, then the server gets full.

    Finally, are you totally sure that it is SQL Server that is causing the problem? The way to check is, as previously suggested, to stop and restart the service and check the performance. If performance does not drastically improve, the problem is not SQL (even if it was using the memory).


    Thursday, December 3, 2015 4:54 PM
  • You are probably referring to the flushing of the plan cache, as raised by me here: SQL Plan Cache is flushed for a database post ALTER DATABASE operation

    Regards,
    Mark Broadbent.
    Microsoft Certified Master
    Contact me through twitter | blog | sqlcloud


    Please click "Propose as answer" if a post solves your problem
    or/and vote the post up if it's been helpful.

    Thursday, December 3, 2015 5:22 PM
  • Pete,

    You are correct, but on the forums we try to stick to suggesting best practice. Would you change the recovery model to simple on prod to release memory? I guess nobody on this thread would :). The whole point is that SQL Server taking memory IS NOT AN ISSUE. So why change anything?

    There are lots of methods to release memory, lots of DBCC commands; go try whatever you want and "take an axe to your own legs".

    I still don't agree with the OP's solution, and I would make that clear for anyone reading this thread: please don't do it.


    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

    My Wiki Articles

    MVP


    Friday, December 4, 2015 7:39 AM
    Moderator
  • Seconded. Shanky is right, best practice is called best practice for a reason!

    Regards,
    Mark Broadbent.
    Microsoft Certified Master
    Contact me through twitter | blog | sqlcloud


    Please click "Propose as answer" if a post solves your problem
    or/and vote the post up if it's been helpful.

    Friday, December 4, 2015 8:28 AM
  • Pete,

    You are correct, but on the forums we try to stick to suggesting best practice. Would you change the recovery model to simple on prod to release memory? I guess nobody on this thread would :). The whole point is that SQL Server taking memory IS NOT AN ISSUE. So why change anything?

    There are lots of methods to release memory, lots of DBCC commands; go try whatever you want and "take an axe to your own legs".

    I still don't agree with the OP's solution, and I would make that clear for anyone reading this thread: please don't do it.


    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

    My Wiki Articles

    MVP


    Hello Shanky

    If there are other applications on the same server as the database, such as, say, a website, then SQL Server taking up all the memory would impact the website. That would mean SQL Server was configured incorrectly, which is what was pointed out by others and I agreed with.

    As for best practice, I did say very clearly that putting it into simple mode was the WRONG choice and I did not agree with it. I did state that in some cases it would release memory; I said that because I felt that ran was being attacked for a decision that worked for him.

    Best practice is what you strive for, but when you have an emergency situation you do what you can to fix it, no matter the route.

    Best practice in this case would be to have the database on its own server, backed up on a regular basis, with a defragmentation job and jobs to check the consistency of the data.



    Friday, December 4, 2015 10:20 AM
  • Seconded. Shanky is right, best practice is called best practice for a reason!

    Regards,
    Mark Broadbent.
    Microsoft Certified Master
    Contact me through twitter | blog | sqlcloud


    Please click "Propose as answer" if a post solves your problem
    or/and vote the post up if it's been helpful.

    Thanks Mark

    If you check the wiki definition of Best Practice you will see this line "Sometimes a "best practice" is not applicable or is inappropriate for a particular organization's needs. A key strategic talent required when applying best practice to organizations is the ability to balance the unique qualities of an organization with the practices that it has in common with others."

    Thanks

    Peter

    Friday, December 4, 2015 10:29 AM
  • Peter,

    I don't quite understand your need to try to argue with or contradict the comments posted by myself or Shanky, which are made purely for the purpose of helping the MSDN audience and (in particular) the questioner.

    I don't think either of us is looking to elicit a reaction from you, and speaking for myself (I suspect Shanky may feel the same), I would prefer to spend time answering the questions precisely and correctly rather than entering into any unnecessary waste of energy between us.

    I've looked through your responses to the questioner again, and the subsequent follow-ups, and I'm afraid I fail to see the relevance of much of what you have suggested to the question in hand (which was mostly answered by others already); some of it is simply plain bad advice in my opinion, for example:

    Finally, are you totally sure that it is SQL Server that is causing the problem? The way to check is, as previously suggested, to stop and restart the service and check the performance. If performance does not drastically improve, the problem is not SQL (even if it was using the memory).

    - To suggest restarting the SQL service to determine whether SQL is the problem is perhaps the worst advice I have seen for troubleshooting memory shortfalls on a server.

    You will probably disagree again, and that's your prerogative, but it doesn't mean that incorrect or misleading responses cannot be flagged as such for the sake of the questioner or the MSDN audience (who are the ones we are ALL trying to help).

    Thanks for the description of best practice, although it might surprise you to know that I already understand the term and its use (and funnily enough didn't even need to look at Wikipedia to understand it).

     

    Regards,
    Mark Broadbent.
    Microsoft Certified Master
    Contact me through twitter | blog | sqlcloud


    Please click "Propose as answer" if a post solves your problem
    or/and vote the post up if it's been helpful.

    Friday, December 4, 2015 12:50 PM
  • Many comments are right that this CANNOT have an impact on memory.

    It turned out to be a red herring, though. Under the FULL recovery model the logs were overflowing, and that is what forced us to reboot the server.

    Once the DB recovery model was set to SIMPLE, the logs did not overflow and things worked fine.

    Thanks.

    Friday, December 4, 2015 6:46 PM
  • Alright, so you had an emergency with logs exhausting your disk space, and that in turn made your server super slow?

    You had to get rid of the logs, so you put the recovery model to simple and voila the logs were gone...

    But by doing that, you lost the possibility of restoring your database to a point in time.

    Taking a transaction log backup (and scheduling that regularly) would do the same, except you would not lose the ability to restore to a point in time.

    However, if restoring to your latest full backups is good enough, and you can live with losing the data since the backup, then you can use simple recovery model.
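
    For anyone diagnosing this later, a quick sketch to see what is preventing log truncation in each database (LOG_BACKUP against a FULL recovery database means it is waiting for a transaction log backup):

    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;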


    Christmas spirit all year long. Please remember to vote on useful replies. Mark answers. Help answer others while you are visiting. Thank you.


    Sunday, December 6, 2015 10:03 PM
  • We have exactly the same issue and pain with a SharePoint 2013 based portal where most of the DBs are set to full recovery. We have to restart the SQL service every 10 days (it eats 120 GB like Godzilla).

    Can we use an every-4-hours backup plan for a few critical DBs and set all DBs to simple?

    In our other projects, where all DBs are set to simple, we never faced this issue.


    MCTS Sharepoint 2010, MCAD dotnet, MCPDEA, SharePoint Lead

    Saturday, March 5, 2016 1:15 PM
  • The recovery model is generally unrelated to memory usage.  Please start a new thread describing your actual issue.

    David


    David http://blogs.msdn.com/b/dbrowne/

    Saturday, March 5, 2016 1:21 PM
  • We have exactly the same issue and pain with a SharePoint 2013 based portal where most of the DBs are set to full recovery. We have to restart the SQL service every 10 days (it eats 120 GB like Godzilla).

    What happens if you don't restart SQL Server?

    SQL Server grabbing as much memory as it can is an expected behaviour, since it is designed that way. So from what you say there is no cause for alarm.

    As David said, start a new thread and describe your problem from start to end.

    Saturday, March 5, 2016 6:00 PM
  • What happens if you don't restart SQL Server?

    Everything becomes almost dead slow. Once we restart the service, memory consumption goes to 6%, and then it can live for another 10 days.


    MCTS Sharepoint 2010, MCAD dotnet, MCPDEA, SharePoint Lead


    Sunday, March 6, 2016 12:17 AM
  • What happens if you don't restart SQL Server?

    Everything becomes almost dead slow. Once we restart the service, memory consumption goes to 6%, and then it can live for another 10 days.


    MCTS Sharepoint 2010, MCAD dotnet, MCPDEA, SharePoint Lead


    Like David said, please create a new thread; it's not advised to piggyback on an old thread, and you would get a better response there.

    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

    My Wiki Articles

    MVP

    Sunday, March 6, 2016 5:59 AM
    Moderator