Sunday, January 06, 2013 10:04 PM

We plan to deploy SQL Server 2008 R2 Standard Edition (15-20 named instances) in a failover cluster on Windows Server 2008 R2 Enterprise Edition. This is for a shared environment where each instance will be reserved for a separate client.
I would like to get clarity before proceeding with the actual installation.
1- Is there any documentation from Microsoft that provides detailed guidance on the subject?
2- What unique resources are required for each named instance? I understand that we need a unique network name and IP for each instance. Do we also need a separate disk resource for each instance?
3- How is memory allocated to each instance? Is configuring the max server memory setting enough?
Monday, January 07, 2013 4:25 AM
What you're looking at doing might not be the best approach. While it *may* work (depending on planning and methodology), I'm not sure all of the possible scenarios have been thought through.
To answer your questions:
1. Yes, there is (including sub topics): http://msdn.microsoft.com/en-us/library/ms189134(v=sql.105).aspx
1.a. The pattern is the same for every clustered instance after the first, assuming Windows Server Failover Clustering (WSFC) is already installed and configured.
2. This will vary depending on how you set up your environment; however, at a minimum you'll need 1 disk, 1 IP, and 1 unique network name tied to that IP. Yes, you'll need at least one dedicated disk resource per instance. I'll talk about this more below.
3. Memory is allocated from the node that the active instance is currently running on. Thus, if all instances are active on a single node, all memory will be allocated from that node. Remember that multi-page allocations come from outside the buffer pool in 2008 R2 and below and are not counted toward the max server memory setting. You'll want to set max and min server memory according to your worst-case scenario (IMHO); a configuration sketch follows this list.
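For item 3, here's a minimal sketch of capping one instance with sp_configure via PowerShell. The snap-in name is the one shipped with SQL Server 2008 R2; the instance name and memory values are placeholder assumptions, not recommendations:

    Add-PSSnapin SqlServerCmdletSnapin100   # SQL Server 2008 R2 cmdlet snap-in

    # min = worst-case floor, max = per-instance ceiling; both are placeholders.
    $q = "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
         "EXEC sp_configure 'min server memory (MB)', 1024; " +
         "EXEC sp_configure 'max server memory (MB)', 3072; RECONFIGURE;"
    Invoke-Sqlcmd -ServerInstance 'SQLVS01\INST01' -Query $q

Repeat per instance, choosing values so that the sum of the max settings (plus OS and multi-page overhead) still fits on your worst-case node.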
Additional items for you to consider:
Since you'll want (let's go with the upper limit of your post) 20 instances on the same cluster, no doubt you'll still want different disks handling different parts of the database structures (log, primary filegroup, backups, etc.), which will have to be done through mount points. The reason is that in Windows Server 2008 R2 and below there is a maximum of 26 possible drive letters. Assuming A, B, and C are used or reserved by default, and assuming an optical drive exists which we shall call D, that leaves a maximum of 22 letters. Compound this with the quorum model and a fifth letter could be lost to a disk witness (again, this depends on quorum). So with a maximum of 21 letters, you can have a maximum of 21 instances, since each requires its own root volume; any additional disks per instance must be mount points under that root. If mount points were not used, the number of instances would go down dramatically based on how many letters each instance consumed. Keep in mind that a SQL Server instance on a cluster can only "see" and "use" the disk resources it depends on (as in the dependency properties of the WSFC resource in the resource group). This means thought must be put into a disk strategy beforehand so as not to hit any limitations in terms of disk resources or cross-instance dependencies.
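To illustrate the dependency side of this (the mount point itself is created in Disk Management), a rough sketch assuming the FailoverClusters module on Windows Server 2008 R2, with placeholder resource and group names:

    Import-Module FailoverClusters

    # Move the mounted-volume disk into the instance's resource group,
    # then add it as a dependency so the SQL Server resource can use it.
    Move-ClusterResource -Name 'Cluster Disk 5' -Group 'SQL Server (INST01)'
    Add-ClusterResourceDependency -Resource 'SQL Server (INST01)' -Provider 'Cluster Disk 5'

    # Verify what the instance now depends on.
    Get-ClusterResourceDependency -Resource 'SQL Server (INST01)'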
On the subject of memory, as I stated above, min server memory should be set for your worst-case scenario and max server memory for the best-case scenario within reason. If this is not suitable for your environment then I would not look at a single large cluster as a possible solution. I'm not in your environment and I don't know what specifications or requirements you are building against, but the worst-case scenario, in my honest opinion, must be looked at and weighed by management. Since WSFC is a shared-nothing model, memory is allocated from the node that currently owns and runs the resource. If all 20 instances land on a single node that has 80 GB, and they are all set to a min server memory of 0 and a max server memory of 40 GB, then issues are sure to arise.
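To make that arithmetic concrete, a quick back-of-the-envelope check (the 4 GB OS reserve is an assumed figure for illustration only):

    # Hypothetical worst case: all instances fail over to one node.
    $nodeRamGB   = 80    # RAM on the surviving node
    $osReserveGB = 4     # assumed reserve for the OS and non-buffer-pool allocations
    $instances   = 20

    $maxPerInstanceGB = [math]::Floor(($nodeRamGB - $osReserveGB) / $instances)
    "Worst-case max server memory per instance: $maxPerInstanceGB GB"   # 3 GB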
The last item is the bandwidth and throughput of your network. Running 20 instances on a few nodes, or possibly a single node depending on the worst-case scenario and quorum model, may require a significant amount of network I/O, especially if something such as FCoE or iSCSI is used, which should have its own dedicated network on every server. This could make your network extremely complicated and messy. All of this, along with normal SQL Server client request load, should be tested or simulated before equipment is purchased, and should be gone over carefully with your network infrastructure team.
I'm sure there are a few other items to hammer out, such as port ranges, firewall rules, etc., and how those will play into such an environment, but those are for the most part out of scope until the base design is agreed upon and laid out.
Friday, January 11, 2013 9:34 PM
I would just like to add to Sean's comprehensive reply. I agree with him that having a large number of instances on a single cluster is not necessarily a good move in most situations (you would be better off splitting into multiple clusters with a smaller number of nodes each). But if you do pursue a single-cluster model with a large number of nodes (and instances), then you should look to logically restrict the possible movement (and therefore coexistence) of instances onto nodes, to prevent a high number of instances running on the same physical node.
You can do this with two techniques. The first is to install the SQL FCI onto only those nodes that you want it to be able to run on. The second is to remove servers from the possible owner list of the IP resource of the SQL FCI. On a failure, the SQL FCI will only fail over to nodes that remain in the possible owner list.
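A sketch of the second technique using the FailoverClusters module (the resource and node names are placeholders):

    Import-Module FailoverClusters

    # Limit the FCI's IP resource to two nodes; failover anywhere else is blocked.
    Get-ClusterResource -Name 'SQL IP Address 1 (SQLVS01)' |
        Set-ClusterOwnerNode -Owners NODE01, NODE02

    # Confirm which nodes remain possible owners.
    Get-ClusterResource -Name 'SQL IP Address 1 (SQLVS01)' | Get-ClusterOwnerNode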