SQL Server 2014 Setup as a Hyper-V VM.

    Question

  • Right now I manage a 37 GB (.mdf) database on a SQL Server 2008 R2 instance residing on a physical Windows Server 2008 R2 server.  The recovery model is set to simple, so the .ldf files are relatively small (14 GB for the active and archive DBs).  The .mdf files (active and archive) reside on a RAID10 array and take up approx. 250 GB; the .ldf files and tempdb (25 GB) reside on a separate RAID10 array.  Everything has been working great for the past 7 years.

    Now I need to refresh the hardware and upgrade, etc.

    I want to virtualize the SQL 2014 installation and also another .NET server and file storage on the same Hyper-V host.

    So, to simplify the discussion, I am interested in SSDs to take advantage of the quicker response, etc.

    I want the new server to allow SQL Server to run as fast as possible.  Dell Support indicated that disk usage on our existing SQL server is read intensive.
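
    For reference, here is a rough way I can double-check that read-intensive characterization on the current 2008 R2 instance myself, using the cumulative I/O counters (a sketch; the numbers reset whenever the instance restarts):

        -- Cumulative reads vs. writes per database and file type since the last instance restart.
        SELECT DB_NAME(vfs.database_id)                  AS database_name,
               mf.type_desc                              AS file_type,
               SUM(vfs.num_of_bytes_read) / 1048576.0    AS mb_read,
               SUM(vfs.num_of_bytes_written) / 1048576.0 AS mb_written
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
            ON mf.database_id = vfs.database_id
           AND mf.file_id     = vfs.file_id
        GROUP BY vfs.database_id, mf.type_desc
        ORDER BY mb_read DESC;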

    I am considering configuring the Hyper-V server with the following hardware:

    (2) 10-core processors (Intel Xeon E5-2630 v4, 2.2 GHz, 25M cache, 8.0 GT/s QPI, Turbo, HT, 10C/20T, 85 W, max memory speed 2133 MHz)

    96 GB of 2400 MT/s dual-rank RAM

    (1) 800 GB RAID10 array of (4) 400 GB SAS mixed-use MLC SSDs

    (2) 1.8 TB RAID10 with (4) 900 GB 15K RPM conventional disks

    I have been conferring on SQL forums, and most participants do not have long-term experience with SSD drives.  So they recommend placing only tempdb on the SSD array, and leaving the .mdf and .ldf files on separate RAID10 arrays of conventional disks.

    I need to plan for the long term, 5 to 7 years, with minimal interruption and performance degradation.

    The fear is a possible failure of one or more of the SSD drives past the 3 - 5 year mark under constant usage.

    So with this in mind, I was originally going to place the SQL 2014 boot volume, .mdf, .ldf, and tempdb all on the 800 GB SSD array and be done with it.

    It was suggested not to do so, simply because SSD drives do not provide advance notice of a possible failure.  So I originally considered a RAID5 SSD array, but then upgraded it to RAID10 to provide more protection against a single drive failure.

    It was then suggested that this might still be very risky, and that placing only tempdb on the SSD array would be the safest approach to protect against drive failure and possible downtime in a mission-critical environment.
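
    If I do end up going the tempdb-only route, relocating tempdb onto the SSD volume is straightforward (a sketch; the S:\ path below is hypothetical, and the change only takes effect after the SQL Server service restarts):

        -- Repoint tempdb's default files at the SSD volume (hypothetical path).
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\TempDB\tempdb.mdf');
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'S:\TempDB\templog.ldf');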

    I plan on allocating 64 GB of RAM solely to the SQL instance, which is more than 1.5 times the database size.  I figure that if the whole active database can reside in memory, that would provide the greatest speed advantage.
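
    To hold the instance to those 64 GB, I would cap max server memory accordingly (a sketch; 65536 MB = 64 GB):

        -- Cap the instance at 64 GB so the guest OS and anything else keep the remainder.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 65536;
        RECONFIGURE;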

    I know another SQL DBA who acquired a physical server with all SSD drives, and he states that the performance difference compared to conventional disks is night and day.  However, from a long-term perspective, I do not feel comfortable with the unknowns of SSD reliability in mission-critical production over an extended 5 to 7 year period.

    Can you shed some light on this topic?


     



    Tuesday, February 21, 2017 7:14 PM

All replies

  • >However, from a long-term perspective, I do not feel comfortable with the unknowns of SSD reliability in mission-critical production over an extended 5 to 7 year period.

    Easy.  Plan for disk failure.  Switch to FULL recovery, and take database and log backups to the other drives, and (less frequently) to Azure.  In case of a disk failure, take a tail-log backup (if you can), and then restore your databases to the non-failed (or new) drives.

    Your RTO is a couple of hours, tops, and your RPO is defined by the log backup frequency.
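
    A minimal sketch of that strategy (the database name and paths are placeholders; the Azure copies would use BACKUP ... TO URL with a credential, omitted here):

        -- Switch to FULL recovery so log backups (and a tail-log backup) are possible.
        ALTER DATABASE YourDb SET RECOVERY FULL;

        -- Routine backups to the non-SSD array.
        BACKUP DATABASE YourDb TO DISK = 'E:\Backups\YourDb_full.bak' WITH COMPRESSION, CHECKSUM;
        BACKUP LOG      YourDb TO DISK = 'E:\Backups\YourDb_log.trn'  WITH COMPRESSION, CHECKSUM;

        -- After a data-drive failure: capture the tail of the log if the log file survived...
        BACKUP LOG YourDb TO DISK = 'E:\Backups\YourDb_tail.trn' WITH NO_TRUNCATE, NORECOVERY;

        -- ...then restore the full backup, the log chain, and the tail onto the new drives.
        RESTORE DATABASE YourDb FROM DISK = 'E:\Backups\YourDb_full.bak' WITH NORECOVERY;
        RESTORE LOG      YourDb FROM DISK = 'E:\Backups\YourDb_log.trn'  WITH NORECOVERY;
        RESTORE LOG      YourDb FROM DISK = 'E:\Backups\YourDb_tail.trn' WITH RECOVERY;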

    BTW we are still learning about the reliability characteristics of flash storage as these devices mature and gain wide adoption.  It's quite likely that your new flash-based storage will be _much_ more reliable than your current spinning disk media.

    David


    Microsoft Technology Center - Dallas






    Tuesday, February 21, 2017 7:27 PM
  • Hi cmp119,

    Firstly, the performance gain with SSDs is significant, which gives you plenty of reasons to migrate to an SSD-based system. Paul S. Randal blogged about this in 2010, and you can expect even better results in 2017.

    Secondly, unlike a traditional hard drive, your workload characteristics do impact SSD lifespan, since it is limited by P/E (program/erase) cycles. That means you really need to calculate an SSD replacement/life cycle based on your current workload (see the sketch below).

    Thirdly, make sure you choose a major-brand, enterprise-grade product for better reliability.
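
    As a starting point for that calculation, here is a sketch that estimates the average daily write volume from SQL Server's own counters; you can then compare the result against the TBW or DWPD endurance rating on the drive vendor's datasheet:

        -- Rough average MB written per day since the last instance restart.
        SELECT SUM(vfs.num_of_bytes_written) / 1048576.0
               / NULLIF(DATEDIFF(DAY, MAX(si.sqlserver_start_time), GETDATE()), 0) AS avg_mb_written_per_day
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        CROSS JOIN sys.dm_os_sys_info AS si;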

    If you have any other questions, please let me know.

    Regards,
    Lin

    MSDN Community Support

    Wednesday, February 22, 2017 2:18 AM
    Moderator

  • I plan on allocating 64 GB of RAM solely to the SQL instance, which is more than 1.5 times the database size.  I figure that if the whole active database can reside in memory, that would provide the greatest speed advantage.

    Well, if the RAM size is greater than the combined database sizes, then disk speed will not matter much, since in the long run most of the database pages will be in memory. But this configuration cannot scale once your databases grow into the 800+ GB or TB range.
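
    To verify that assumption once the system is running, you can check how much of each database actually sits in the buffer pool (a sketch; the query can take a while against a very large buffer pool):

        -- Buffer pool usage per database (8 KB pages converted to MB).
        SELECT DB_NAME(database_id) AS database_name,
               COUNT(*) * 8 / 1024  AS buffer_pool_mb
        FROM sys.dm_os_buffer_descriptors
        GROUP BY database_id
        ORDER BY buffer_pool_mb DESC;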

    SSDs have been reliable for me; maybe I am lucky. If you ask me, moving to SSD will greatly enhance your performance.


    Cheers,

    Shashank



    MVP


    Wednesday, February 22, 2017 7:14 AM
    Moderator