Designing vSphere clusters to use as vCloud Director provider vDCs

Posted on March 29, 2012 in vCloud, VMware

In talking with different customers I have been getting questions about how clusters should be sized and utilized in a vCloud environment. Before we get too far into the details, I will give a brief overview of how capacity is presented to vCloud.

When setting up resources or capacity within vCloud you first have to create Provider Virtual Datacenters (vDCs). Each provider vDC maps back to either a single vSphere cluster or a resource pool. This means that depending on your decision to scale out or scale up, you will end up with either a few large pools of capacity or several smaller ones. There are many reasons you might choose either option, so I won't go into that discussion.

At the end of this post I have described the three options for allocating resources to Organization vDCs. These provide different methods to assign, guarantee, or overcommit resources. With any of them, you will need to decide how you will construct the vSphere HA clusters that will support your cloud.

So you might be wondering what effect an HA cluster could have on your cloud. To give a simple example: depending on the options you use to build your HA cluster, tenants could provision so many VMs that not all of them can restart after a failure, or a newly provisioned vApp could fail to start because it did not pass admission control. These are just some of the things to think about, along with what level of failure you will design for: do you need to sustain one host failure (N+1) or two host failures (N+2)?

Example 1: You configure a vSphere cluster for HA and either allow it to tolerate up to one host failure or specify the percentage of cluster resources it will reserve as spare capacity. Both of these methods do their best to guarantee that resources are available for failover. Back in the server virtualization days you had an admin team that managed provisioning and made sure these limits were not violated. But in a vCloud deployment you will need to allocate these resources to vDCs. Depending on which allocation method you choose (see the options at the end of this post), you might be overcommitting resources or allocating 100% of them. This could allow the cloud tenants to provision too many vApps, causing some of them to fail to start, either initially or after a host failure.
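To make the sizing concrete, here is a minimal sketch of the capacity left for vDC allocation under the two admission control policies just described. The function names and the host specs (20 GHz of CPU and 96 GB of RAM per host) are made-up examples, not anything from vCloud itself:

```python
# Hypothetical sketch: capacity available for vDC allocation under the two
# HA admission control policies. Host specs here are illustrative only.

def usable_with_host_failures(hosts, per_host_ghz, per_host_gb, failures=1):
    """'Host failures tolerated' policy: hold back whole hosts' worth of capacity."""
    active = hosts - failures
    return active * per_host_ghz, active * per_host_gb

def usable_with_percentage(hosts, per_host_ghz, per_host_gb, reserve_pct=25):
    """'Percentage of cluster resources' policy: hold back a fixed percentage."""
    factor = 1 - reserve_pct / 100
    return hosts * per_host_ghz * factor, hosts * per_host_gb * factor

# A 4-host cluster, each host with 20 GHz CPU and 96 GB RAM:
print(usable_with_host_failures(4, 20, 96, failures=1))   # N+1
print(usable_with_percentage(4, 20, 96, reserve_pct=25))  # 25% held back
```

Note that for a 4-host cluster these two policies land in the same place (25% of the cluster is one host's worth), which is why either can work; the difference shows up as you add hosts or design for N+2.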

To avoid this issue you need to architect your clusters and vDCs with close attention to sizing and how you allocate the resources. Also, when adding capacity to a cluster you will need to modify the settings on the vDC in vCloud to account for the capacity that was added, taking care not to get your calculations out of whack.

Example 2: This method is something I have never been much of a fan of for straight server virtualization, but I sometimes use it for cloud designs. In this option a dedicated failover host is specified; this host sits waiting to take over when a host fails. In this design you would not allocate these resources, since they are not available for use. This is usually regarded as a waste when people hear that a host will be sitting there doing nothing, but it does accomplish what we are trying to do.

Now when creating your vDCs in vCloud you can allocate up to 100% of the resources, since there is dedicated failover capacity. I personally don't like to allocate 100% even in this scenario; I prefer something in the 90-95% range, which allows a little bit of breathing room. You could also allocate a smaller percentage at first and then raise it to add capacity as needed.

When you expand the cluster you still need to edit the vDC in vCloud to allocate the newly added capacity, but it is a much more straightforward process: all you have to do is increase the resources back to the level you previously chose. For example, if we had 4 hosts allocated at 100% and we add a 5th host, when we edit the vDC we will see the allocation has dropped to 80% of total resources. All that is needed is to increase it to 100% again. This makes the process of scaling a little easier to understand.
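The arithmetic behind that 100%-to-80% drop can be sketched in a couple of lines. This is just illustrative math, not a vCloud API call:

```python
# Hypothetical sketch of the scaling math above: when hosts are added, the
# existing vDC allocation becomes a smaller percentage of the bigger cluster.

def allocation_pct_after_expand(old_hosts, new_hosts, old_pct=100):
    """Percentage the existing vDC allocation represents after adding hosts."""
    return old_pct * old_hosts / new_hosts

# 4 hosts allocated at 100%; a 5th identical host is added:
print(allocation_pct_after_expand(4, 5))  # 80.0
```

After the expansion you simply edit the vDC and raise the allocation back to your chosen target (100% here, or 90-95% if you keep headroom).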


The image below shows the different allocation models available in vCloud. Each offers a different way to allocate resources to Organizations, and each requires different planning. You can use a cluster to provide resources to one allocation method or mix and match them. Mixing allocation methods that draw their resources from a single cluster, however, requires you to pay close attention to how many resources are allocated to each vDC.

Pay as you go model

With this model you set the amount of CPU and memory that you will guarantee to each VM. This allows you to overcommit resources if you desire. No resources are committed to an Organization until a vApp is deployed. The image below shows the available resources and what you are going to guarantee; it also gives an estimate of how many virtual machines could be created with these settings.
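That VM-count estimate comes down to dividing available capacity by the guaranteed portion of each VM. Here is a rough sketch of that math; the figures (60 GHz / 288 GB available, a 1 GHz vCPU guaranteed at 20%, 4 GB of memory guaranteed at 50%) are invented for illustration, not taken from the vCloud UI:

```python
# Hypothetical sketch of the pay-as-you-go VM estimate: each deployed VM
# only reserves the guaranteed fraction of its CPU and memory.

def estimate_vms(avail_ghz, avail_gb, vcpu_ghz, cpu_guarantee_pct,
                 mem_gb, mem_guarantee_pct):
    cpu_per_vm = vcpu_ghz * cpu_guarantee_pct / 100  # GHz reserved per VM
    mem_per_vm = mem_gb * mem_guarantee_pct / 100    # GB reserved per VM
    # The tighter of the two resources limits the estimate.
    return int(min(avail_ghz / cpu_per_vm, avail_gb / mem_per_vm))

# 60 GHz / 288 GB free; 1 GHz vCPU @ 20% guaranteed, 4 GB RAM @ 50% guaranteed:
print(estimate_vms(60, 288, 1, 20, 4, 50))  # 144
```

In this example memory is the limiting resource (2 GB reserved per VM), which is typical: lowering the guarantees raises the estimate, which is exactly the overcommitment trade-off this model exposes.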

Allocation model

In this model you allocate an amount of CPU and memory resources to an Organization in vCloud, but you do not have to guarantee all of it. For example, you could allocate 5GHz of CPU but guarantee only 50% of that amount. This allows for overcommitment if desired.
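The overcommitment this enables is easy to see with the 5GHz example. A minimal sketch (the function name is mine, not a vCloud term):

```python
# Hypothetical sketch: Allocation model, where an Org vDC's allocation is
# backed by only its guaranteed fraction of reserved cluster capacity.

def reserved_ghz(allocated_ghz, guarantee_pct):
    """CPU actually reserved against the provider vDC for this Org vDC."""
    return allocated_ghz * guarantee_pct / 100

# Allocate 5 GHz, guarantee 50%: only 2.5 GHz is reserved, so two such
# Org vDCs reserve 5 GHz of real capacity while being promised 10 GHz.
print(reserved_ghz(5, 50))  # 2.5
```

That gap between allocated and reserved capacity is the overcommitment, and it is what you must track carefully when several vDCs share one cluster.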

Reservation model

In the last model you are reserving capacity for an Organization. Much like the allocation model, you allocate CPU and memory, but this model guarantees all of the resources.



About Brian Suhr

Brian is a VCDX5-DCV, a Sr. Tech Marketing Engineer at Nutanix, and the owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and cloud project designs. Awarded VMware vExpert status for six years, 2011 - 2016. VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design

