Help with SQL Server consolidations

  • Question

  • Hi

    I have multiple physical and virtual database servers that we are migrating to AWS. To estimate the memory requirements I took the memory each database was using and added it together. Based on that I was suggesting 4 servers at 244GB each. The company wants to reduce this if possible. I know you can get the page life expectancy, but is there a way to break it down by database?

    I am not sure whether each database actually needs the memory it is currently using, and I guess I won't be able to test this until I go live. I don't want to under-spec and then have the disks/CPU unable to cope with the demand.

    Thanks

    Wednesday, September 21, 2016 4:05 PM

Answers

All replies

  • Hello,

    The following article may help you establish the capacity requirements of your databases on AWS:

    https://www.brentozar.com/archive/2014/06/sizing-sql-server-aws/   

    Hope this helps.



    Regards,

    Alberto Morillo
    SQLCoffee.com

    • Proposed as answer by Teige Gao (Moderator) Thursday, September 22, 2016 7:00 AM
    • Marked as answer by SG_87 Wednesday, September 28, 2016 8:35 AM
    Wednesday, September 21, 2016 7:13 PM
    Moderator
  • Hi SG_87,

    Based on my experience, we cannot break page life expectancy down to a single database. Also, reducing memory carries the risk of degrading performance, so if you do want to reduce it we suggest monitoring the databases in real time first.

    Measuring the size of the active working set of data is difficult, but there are a few easy ways to get most of the way there. You can measure page life expectancy with the following queries (a per-database sketch follows after the two queries):

    •    On a single CPU socket system, please use the following code:

    SELECT  object_name ,
            counter_name ,
            cntr_value AS [value]
    FROM    sys.dm_os_performance_counters
    WHERE   LTRIM(RTRIM(object_name)) = 'SQLServer:Buffer Manager'
            AND LTRIM(RTRIM(counter_name)) = 'Page life expectancy' ;


    •    On a system with more than one CPU socket, please use the following code:

    SELECT  object_name ,
            counter_name ,
            instance_name AS [NUMA Node] ,
            cntr_value AS [value]
    FROM    sys.dm_os_performance_counters
    WHERE   LTRIM(RTRIM(object_name)) = 'SQLServer:Buffer Node'
            AND LTRIM(RTRIM(counter_name)) = 'Page life expectancy' ;
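
    While page life expectancy only exists per buffer manager or NUMA node, a rough idea of how the buffer pool is split between databases can be taken from sys.dm_os_buffer_descriptors. The following is only an illustrative sketch (a point-in-time snapshot, so sample it repeatedly during busy periods), not an exact measure of what each database needs:

    -- Approximate buffer pool usage per database (8KB pages converted to MB).
    -- Point-in-time only; run it several times under a representative workload.
    SELECT  CASE WHEN database_id = 32767 THEN 'ResourceDb'
                 ELSE DB_NAME(database_id) END AS [Database] ,
            COUNT(*) * 8 / 1024 AS [Buffer Pool MB]
    FROM    sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY [Buffer Pool MB] DESC ;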


    For more information, please review this article.

    Regards,
    Teige
    Thursday, September 22, 2016 7:31 AM
    Moderator
  • You also need to calculate what your current IOPS and latency figures are for all of your current SQL Servers. You'll find that the IOPS in AWS are restricted by the size of the disks AND the size of the machines.

    Also be aware that the IOPS figures AWS publishes are measured in 4K, so if you have 64K-formatted disks then under the covers that is going to consume more of your IOPS.

    You might find that to get the required disk performance you even have to go larger than the machine sizes you have calculated so far.
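
    As a rough starting point, the current I/O volume and average latency per database file can be pulled from sys.dm_io_virtual_file_stats. This is only an illustrative sketch, not a full baseline; the counters are cumulative since the last restart, so take two samples a known interval apart and difference them to estimate IOPS:

    -- Cumulative reads/writes and average I/O stall per file since the last restart.
    SELECT  DB_NAME(vfs.database_id) AS [Database] ,
            mf.physical_name ,
            vfs.num_of_reads + vfs.num_of_writes AS [Total IO] ,
            (vfs.io_stall_read_ms + vfs.io_stall_write_ms)
                / NULLIF(vfs.num_of_reads + vfs.num_of_writes, 0) AS [Avg Latency ms]
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN    sys.master_files AS mf
            ON  mf.database_id = vfs.database_id
            AND mf.file_id = vfs.file_id
    ORDER BY [Avg Latency ms] DESC ;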


    Martin Cairney SQL Server MVP

    Thursday, September 22, 2016 7:55 AM
  • Thanks all for your help. One of my servers has 1TB of memory with 2 instances running. One instance has a PLE of 183741 and the other 313247. Maybe I have missed something that has already been said, but how do you know how much memory each instance really needs? Is it just a case of testing, or are there calculations I can do?


    Thursday, September 22, 2016 11:52 AM
  • For example, if I have a DB server whose instance has 30GB and a PLE of 5000, and I reduce the memory to 15GB, is the PLE going to be 2500?

    These instances are SQL Server 2008 R2. Does anyone have any advice on how I can simulate a live workload from one database and run it against multiple databases?

    Thursday, September 22, 2016 12:28 PM