Special considerations for a 32-CPU server


  • Get this: we've finally gotten some new hardware for our incredibly busy Central Administration Site (CAS) in SCCM, and it includes a server that is a 32-CPU box!  Well, to be more specific, it's 32 CORES.  I don't know if they're 4x8 or 8x4, and I guess it doesn't really matter to me.  The cost is nuts either way.

    The server will also have 256GB of RAM and tons of multi-path, top-tier SAN LUNs, so memory and disk I/O shouldn't be an issue.  But we'll keep an eye on it.  We can fit 2TB of RAM in that box if it comes to it <alien drool>

    So, the question is...are there any special considerations when configuring SQL 2012 on a 32-CPU box to make the most of that hardware?

    Things I'm mulling over:

    The last I heard, even if you have more than 8 processors, you still set MAXDOP to only 8 otherwise you end up with too much context switching when the OS is managing that many work queues.  Is that still valid with SQL 2012 on a 32-CPU box or is there a more complicated answer when you go that high?

    Obviously, we were planning to stay with Enterprise Edition to handle that many CPUs...but what about BI edition?  I do intend to possibly do some BI work off it someday as well, since the CAS in SCCM is really intended to be the central reporting and administration server for the enterprise in CM12.  Are there any restrictions on how SQL behaves in BI edition, or how BI behaves in Enterprise edition, that would make me want one over the other?

    We're going to have gobs and gobs of space on multi-path, top-tier SAN LUNs (multiple LUNs, multiple cache, multiple paths, movable workloads), so disk I/O shouldn't be an issue if the SAN folks deliver on what they're promising.  With that in mind, I have had great luck for the last 4-5 years by setting up 1 data file per CPU and 1 tempDB data file per CPU on my 8-way boxes.  So 8 data files in the user DB and 8 data files in tempDB, spread across multiple LUNs to remove disk contention.  So, does the discussion of the number of data files get murky with 32 CPUs?  I wouldn't think 32 data files is really beneficial...or is it?  Again, with an absence of disk I/O contention, what would give us the best utilization of the CPUs, RAM, and SAN?  If they're ponying up for this kind of hardware, I don't want some lame bottleneck making this thing look like it's sitting there idle all day.

    I believe it's a pretty common practice in most places to set power management to PERFORMANCE or FULL POWER or TURBO or whatever in order to not take a bath on the CPU stepping.  I intend to keep that practice alive, unless there's something new in Server 2012 and SQL 2012 that would eliminate that benefit?

    Perhaps there's something else that I'm missing which a person needs to consider when trying to get the most out of a ginormous server...maybe BIOS settings that are relevant, or some SQL trace flags?

    I'm open to any ideas.  I just really haven't had to scale that far beyond 8-way boxes before.

    Number2 - (John Nelson)
    Microsoft MVP (2009) - System Center Configuration Manager

    Tuesday, December 18, 2012 11:33 PM


All replies

  • Hello,

    Regarding MAXDOP: it is based on physical processors, not logical cores.  Use the SELECT statement in the following post to determine the optimal value for MAXDOP.
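
    As a rough sketch (the exact query in the linked post may differ), something like this shows what SQL Server sees and then caps parallelism.  The value 8 is an assumption based on the general "no more than 8" guidance, not a measured recommendation:

    ```sql
    -- Inspect the CPU layout SQL Server reports (SQL Server 2008 and later)
    SELECT cpu_count,                               -- logical processors visible to SQL Server
           hyperthread_ratio,                       -- logical processors per physical socket
           cpu_count / hyperthread_ratio AS physical_sockets
    FROM sys.dm_os_sys_info;

    -- Cap parallelism; 8 is a commonly cited ceiling, adjust to your NUMA node size
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;
    ```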

    Enterprise Edition (Server/CAL) is now limited to 20 cores.  There is an Enterprise Core edition that has no limit on the number of cores.  For more information, please see the following article:

    The number-of-data-files question mainly applies to tempdb.  I would follow Paul Randal's suggestion on that matter: 1/2 or 1/4 the number of processors.
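
    On a 32-core box, 1/4 of the processor count works out to 8 tempdb data files.  A minimal sketch of setting that up follows; the file names, drive letters, and sizes are placeholders, and the files should be equally sized so the proportional-fill algorithm spreads the load evenly:

    ```sql
    -- Resize the existing primary tempdb data file to match its siblings
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 1024MB);

    -- Add equally sized secondary data files (placeholder path shown)
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 8192MB, FILEGROWTH = 1024MB);
    -- ...repeat for tempdev3 through tempdev8, spreading files across LUNs
    ```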

    Please use RAID 10 for the storage dedicated to the databases, and format the drives with a 64 KB cluster size.
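
    For reference, the 64 KB allocation unit size is specified at format time.  The drive letter and volume label below are placeholders:

    ```
    format F: /FS:NTFS /A:64K /V:SQLData /Q
    ```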

    For Power Management, I would suggest choosing the High Performance plan on SQL Server machines, to make sure the server delivers its best performance.
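
    This can be set from the command line as well as the Control Panel; SCHEME_MIN is the built-in alias for the High Performance plan:

    ```
    :: Switch to the High Performance power plan and confirm it took effect
    powercfg /setactive SCHEME_MIN
    powercfg /getactivescheme
    ```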

    Hope this helps.

    Alberto Morillo

    • Proposed as answer by Shahfaisal Muhammed Wednesday, December 19, 2012 4:34 AM
    • Marked as answer by Number2 Wednesday, December 19, 2012 3:59 PM
    Wednesday, December 19, 2012 2:46 AM
  • Cool.  No surprises then. The extra info about the tempDB from Paul was very informative.  Thanks!

    As I read the MAXDOP info...I still think the number of cores is important.  The way I read it, what they're saying is that when you have hyperthreading turned on and the system sees, say, 16 logical processors on a dual quad-core (which has 8 physical cores), then you count it as 8, not 16.

    These days, a quad core on a single socket performs the same as 4 single CPUs in their own sockets.  According to the MAP team, when they discover CPU counts, they count a dual-core processor as 2 physical CPUs, but if hyperthreading is turned on, they count it as 2 physical CPUs / 4 logical CPUs.  Each real core has all the processing guts of a standalone CPU, so both cores can do a full core's worth of work at the same time.  Hyperthreading, on the other hand, can only SOMETIMES do work on both sides of that pipeline as if there were two processors.  In other words, 2 cores can always do any CPU work at the same time, but 2 hyperthreaded logical processors can only SOMETIMES do SOME CPU work at the same time.  So it's not a good idea to include hyperthreaded logical processors in the overall CPU count, but it is OK to include separate cores in the count.
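
    A quick, hedged way to sanity-check what SQL Server itself is using (assuming the DMVs available in SQL 2012) is to count the schedulers it has online for user work, which is one per logical processor it's actually using:

    ```sql
    -- Schedulers SQL Server uses for user queries (one per logical processor in use)
    SELECT COUNT(*) AS visible_online_schedulers
    FROM sys.dm_os_schedulers
    WHERE status = 'VISIBLE ONLINE';
    ```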

    That's how I read all that anyway.

    Thanks for your time!

    Number2 - (John Nelson)
    Microsoft MVP (2009) - System Center Configuration Manager

    Wednesday, December 19, 2012 3:59 PM
  • Hello John,

    Thank you for visiting the MSDN forum!

    Have a great day!

    Alberto Morillo

    Wednesday, December 19, 2012 9:24 PM