maxdop different cpu counts?

  • Question

  • Hello

    I have googled and read a ton about the MAXDOP setting in SQL Server. I'm trying to enforce a "standard base setting" when installing new SQL instances. Of course, they could always be changed afterwards if we see the need. But from the start, I want to have a "good practice" setting in place.

    But I'm getting really confused about how to count my processors. physical, logical, hyperthreading...

    How should I implement my script? Should I calculate using physical or logical CPUs? What defines a logical CPU?

    DECLARE @coreCount tinyint,
            @maxDOP tinyint;

    SET @coreCount = (SELECT cpu_count / hyperthread_ratio FROM sys.dm_os_sys_info);

    -- Map the derived count to a MAXDOP value.
    SET @maxDOP = CASE
                      WHEN @coreCount = 1 THEN 0
                      WHEN @coreCount = 2 THEN 0
                      WHEN @coreCount = 4 THEN 2
                      WHEN @coreCount = 8 THEN 4
                      WHEN @coreCount > 8 THEN 8
                  END;

    This is what I'm using at the moment. Is it totally wrong?

    select cpu_count, hyperthread_ratio from sys.dm_os_sys_info gives me the following:

    cpu_count = 48

    hyperthread_ratio = 12

    And when I look inside Windows (properties on my computer) it says 4 processors.

    Can anyone please give me advice on how I should count and what the maxdop should be set to?

    Friday, June 29, 2012 8:48 AM

All replies

  • So my "starting point" rule of thumb is to set MAXDOP to no more than the number of cores in a single NUMA node, or 8, whichever is lower.

    I will then take into consideration what type of workload (OLTP, DW, ETL etc.) the environment is serving in order to adjust from there on in.
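    The rule of thumb above can be sketched in T-SQL. This is only a sketch: it assumes the instance exposes its NUMA layout through sys.dm_os_nodes (using the online_scheduler_count and node_state_desc columns), and that every online node has the same number of schedulers.

    ```sql
    -- Cores (schedulers) in one NUMA node, capped at 8, as a MAXDOP starting point.
    DECLARE @schedulersPerNode int;

    SELECT TOP (1) @schedulersPerNode = online_scheduler_count
    FROM sys.dm_os_nodes
    WHERE node_state_desc = N'ONLINE';   -- excludes the DAC node

    SELECT CASE WHEN @schedulersPerNode > 8 THEN 8
                ELSE @schedulersPerNode
           END AS suggested_maxdop;
    ```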

    John Sansom | SQL Server DBA Blog | @SQLBrit on Twitter

    Friday, June 29, 2012 9:09 AM

    I think it is very dangerous to define a global rule for MAXDOP based only on your CPU configuration.

    It all depends on your workload. I have had clients who got far better results using a limited MAXDOP, while others experienced just the opposite.

    I always start using MAXDOP=0 and for most cases (certainly DWH) this doesn't need any changes.
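    Whatever number you land on, the instance-wide value is applied with sp_configure; a minimal sketch (MAXDOP is an advanced option, so it has to be made visible first):

    ```sql
    -- Set the server-wide max degree of parallelism.
    -- 0 = let SQL Server use all available schedulers (the default).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 0;
    RECONFIGURE;
    ```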

    Geert Vanhove

    Friday, June 29, 2012 12:14 PM
  • Playing with MAXDOP is quite problematic from a performance point of view. Leave it at the default level so that queries can use parallelism if required, AND check the queries that go parallel and analyze them to make sure they use proper indexes.


    • Edited by RohitGarg Friday, June 29, 2012 12:41 PM
    Friday, June 29, 2012 12:41 PM
  • Unless you are having an issue, I would recommend leaving the setting alone.

    Please see this thread:


    Friday, June 29, 2012 1:42 PM
  • select cpu_count, hyperthread_ratio from sys.dm_os_sys_info gives me the following:

    cpu_count = 48

    hyperthread_ratio = 12

    And when I look inside Windows (properties on my computer) it says 4 processors.

    You have four chips.

    Each chip has six cores, and hyperthreading is turned on, *or* each chip has twelve cores and hyperthreading is turned off.  You can't tell the difference from what SQL Server tells you.  If you go to the Windows task manager you should see 48 CPU performance graphs.  

    What kind of workload do you have?  For OLTP you'd probably want to set maxdop lower, for analytic/batch loads probably higher, for mixed loads, well, you do the math!

    I disagree with Tom: once you get above the 8-core level, I don't think you *ever* want to leave the global maxdop setting at zero. Though frankly, I'm not sure of the behaviors of the more current versions of SQL Server. If SQL Server has figured out how to detect NUMA settings and limit its own parallel plans accordingly, then someone should tell me so. I mention this because I thought I saw some behaviors on one of my own servers that looked like it *might* be doing that, but I've never read anywhere that this is the case.

    Assuming that SQL Server is still waiting for you to set maxdop, or else it will potentially use ALL cores for parallel queries, you want to get the NUMA number for your server. If you have to guess (or it doesn't have NUMA), I'd go with maybe 6 for an OLTP workload or 12 for an analytic workload. Of course, that also assumes you have all 48 cores allocated to this instance of SQL Server; if you assign fewer, then you might want to adjust these numbers down, trying to keep a factor that divides so the work stays on one chip. Is SQL Server smart enough to do that? I'm not sure, though I think I recall seeing a post here saying it is that smart, that it keeps multiple threads on one chip when it can.

    If hyperthreading is turned on, you might just as well consider them real cores, for the purposes of scheduling.
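    To see exactly what SQL Server can (and cannot) tell you here, a quick query against the DMV already used above:

    ```sql
    -- Logical CPUs, logical CPUs per socket, and the derived socket count.
    -- 48 / 12 = 4 sockets in this case; whether each socket's 12 logical CPUs
    -- are 6 physical cores with hyperthreading or 12 physical cores is NOT
    -- visible from this DMV.
    SELECT cpu_count,
           hyperthread_ratio,
           cpu_count / hyperthread_ratio AS socket_count
    FROM sys.dm_os_sys_info;
    ```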



    • Edited by JRStern Friday, June 29, 2012 3:53 PM
    Friday, June 29, 2012 3:53 PM
  • Thanks for the replies.
    However, I never got an answer to whether I should count physical or logical CPUs (sockets or cores)?
    The reason I use this script to set a standard maxdop setting is that Microsoft has been here and evaluated our environment, and they said we should do this. But I can't remember whether they said to calculate on real sockets or logical cores.

    And just like I stated in my original post, this value is just a standard base setting which could then be adjusted to whatever value seems most appropriate if deemed necessary.

    Right now I calculate on the cores, 48 in this case (4 sockets with 12 cores each).
    I think it's the right way, though I don't know how this will play out if hyperthreading is enabled on some server.

    Thanks anyway.
    Thursday, July 5, 2012 8:13 AM
  • The main consideration for DOP on OLTP systems is the following:

    If you get a non-trivial query plan, meaning a plan that can be parallelized, the query engine will try to find a GOOD plan to parallelize it, starting with a maximum number of threads equal to the MAXDOP setting. It will keep decreasing this value until a specific amount of time is reached. Then the query will run with this "good" plan, and if you have bad luck you pay a lot of parallel-processing overhead for a query that would have run quite nicely with a single-threaded execution plan, but never got to the point of being serialized. So, as far as I recall, the number of cores within a single CPU is a good starting point: when your query is parallelized, you at least have a good chance that it runs on one physical CPU, and you won't have that many NUMA context switches (= parallelism overhead).
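    For a single query that parallelizes badly, you can also cap it individually instead of instance-wide, using a query hint. A sketch, where dbo.Orders is a hypothetical table:

    ```sql
    -- Per-query override: force this statement to run single-threaded,
    -- regardless of the instance-wide max degree of parallelism setting.
    SELECT OrderID, CustomerID
    FROM dbo.Orders
    WHERE OrderDate >= '2012-01-01'
    OPTION (MAXDOP 1);
    ```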
    Thursday, July 5, 2012 8:55 AM
  • Please see these guidelines from MS on MAXDOP.


    Thursday, July 5, 2012 1:51 PM