New storage features in vSphere 5.1

With the announcement of vSphere 5.1 at VMworld yesterday, more detailed information is becoming available about the new storage features included in vSphere 5.1. There are some really great items on this list that have not gotten much notice yet. I really like that many of these features are now supported in both View and vCloud Director.

  1. VMFS file sharing limits – The previous limit on the number of hosts that could share a single file was 8 for block storage; in vSphere 5.0 it was raised to 32 hosts for NFS datastores. In vSphere 5.1 the limit for block storage (VMFS datastores) has been raised to 32 hosts as well. This works with both VMware View and vCloud Director, and the primary use case is linked clones.
  2. VAAI updates – VAAI NAS-based snapshots are now available to vCloud Director; in vSphere 5.0 this was available only to View. This allows hardware-based snapshots for faster provisioning of linked clones in both products.
  3. Larger MSCS clusters – The previous limit of 2-node MSCS clusters has been raised to allow up to 5-node MSCS clusters with vSphere 5.1.
  4. All Paths Down update – The handling of I/O timeouts on devices that enter an APD state has been updated to keep hostd from being tied up.
  5. Storage protocol enhancements – The ability to boot from software FCoE was added in vSphere 5.1. Jumbo frame support has been added for all iSCSI adapters, with UI support. There is now full support for 16Gb Fibre Channel HBAs running at full speed; in vSphere 5.0, 16Gb adapters could be used but had to run at 8Gb speeds.
  6. Storage IO Control (SIOC) updates – SIOC will now figure out the best latency threshold for your datastore, as opposed to the manual setting in vSphere 5.0. By default SIOC is now turned on in a stats-only mode, so it won't take any action but will collect stats for you before you configure settings (see the sketch after this list).
  7. Storage DRS (SDRS) enhancements – vCloud Director can now use SDRS for initial placement of fast-provisioned linked clones, managing both free space and I/O utilization. This is an update from vCloud Director 1.5, which used a free-space-only placement method and had no SDRS support.
  8. Storage vMotion enhancements – Storage vMotion now performs up to 4 parallel disk migrations per Storage vMotion operation.
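
To make the SIOC change in item 6 concrete, below is a minimal pyVmomi sketch of what enabling stats-only mode with the automatic congestion threshold might look like through the vSphere API. The vCenter address, credentials, and the datastore name Datastore01 are placeholders, and the IORM spec fields reflect my reading of the API docs, so verify them against your environment before relying on this.

```python
# Hedged sketch: put a datastore's Storage IO Control into stats-only mode
# with the automatic congestion threshold added in vSphere 5.1.
# Connection details and the datastore name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use verified certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Look up the datastore by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
ds = next(d for d in view.view if d.name == "Datastore01")
view.Destroy()

spec = vim.StorageResourceManager.IORMConfigSpec()
spec.enabled = False                        # SIOC enforcement stays off...
spec.statsCollectionEnabled = True          # ...but stats are still gathered
spec.congestionThresholdMode = "automatic"  # let vSphere derive the latency threshold

content.storageResourceManager.ConfigureDatastoreIORM_Task(ds, spec)
Disconnect(si)
```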



VMware vSphere designing with VAAI in mind

With the release of vSphere 4.1 last summer, VMware customers were given several new features. Many of these features were designed to lower the burden on the ESX host, either by being more efficient or by offloading work to something outside the virtualization stack. The overall goal was to continue improving virtual machine performance. The one I am writing about today is VAAI, the vStorage APIs for Array Integration. I want to cover how using VAAI in your architecture designs changes the way you create environments.

The goal of VAAI is to offload some of the storage-focused activities that VMware previously handled on the host to your storage array. This was accomplished by VMware working closely with the major storage vendors. The idea of VAAI was first announced back at VMworld 2008 and finally came to market when vSphere 4.1 was released. Offloading these storage functions reduces the load on ESX(i) hosts and also increases the performance of these activities by letting the storage array do the work it was built to do.

In the current offering of VAAI there are three functions that have been offloaded. In future releases VMware is expected to continue working with storage vendors to expand VAAI, and the currently available features are explained below.

Full Copy – So you're probably wondering how this feature is going to help you. I can think of two VMware functions where this VAAI feature provides upwards of 10x speed improvements. The first is when you are deploying a VM from a template. Say, for example, you are going to deploy a 50 GB VM. When the VM is deployed, vSphere reads the entire 50 GB and then writes the 50 GB, for a total of 100 GB of I/O. With VAAI enabled and a storage array that supports it, this process creates very little I/O at the host. The vSphere host sends a command to the storage array that says, in effect, make a copy of this VM and give it this name. The copy is done locally on the storage array and results in very little I/O between the host and the array. Once completed, the array notifies the host that the work is done.
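
The math above is simple enough to sketch. The function below is just a back-of-the-envelope illustration of host-side traffic with and without the offload; the 1% figure for copy-command overhead is my own rough assumption, not a measured number.

```python
# Illustrative arithmetic only: host-side I/O for copying a VM's disks,
# with and without the VAAI Full Copy (XCOPY) offload.
def host_io_gb(vm_size_gb: float, vaai_full_copy: bool) -> float:
    if vaai_full_copy:
        # Array performs the copy; the host sends only copy commands
        # (assumed here to be ~1% of the data size, purely for illustration).
        return vm_size_gb * 0.01
    # Software copy: the host reads the source and writes the destination.
    return vm_size_gb * 2

print(host_io_gb(50, vaai_full_copy=False))  # 100.0 GB through the host
print(host_io_gb(50, vaai_full_copy=True))   # ~0.5 GB of command traffic
```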

The second VMware feature to benefit is Storage vMotion. I feel this is where Full Copy really pays off, because you are most likely moving a larger chunk of data. For example's sake, let's say we are going to move a 100 GB virtual machine from one datastore to another. In the past this would have caused 200 GB of I/O on the host. With VAAI the burden on the host is almost nothing, as the work is done on the storage array.

Hardware assisted locking – To allow multiple hosts in your cluster to talk to the same storage volume, VMware would lock the volume when one of the VMs needed to write to it. This locking prevents another host from trying to write to the same blocks. It was not a large issue if you were using smaller volumes with only a handful of virtual machines on them. Now with VAAI the locking has been offloaded to the storage array, and it's possible to lock only the blocks that are being written to. This opens up the possibility of using larger volumes and increasing the number of VMs that can run on a single volume.
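
To picture the difference in lock granularity, here is a toy Python simulation, not the SCSI protocol itself: the old behavior serializes every host behind one volume-wide lock, while hardware assisted locking (ATS) contends only on the block actually being updated.

```python
# Toy model of volume-wide locking vs per-block (ATS-style) locking.
import threading

class Datastore:
    def __init__(self, num_blocks: int, per_block_locking: bool):
        self.per_block = per_block_locking
        self.volume_lock = threading.Lock()  # stands in for a SCSI reservation
        self.block_locks = [threading.Lock() for _ in range(num_blocks)]

    def update_block(self, block: int):
        # ATS-style: lock just this block; legacy: lock the whole volume.
        lock = self.block_locks[block] if self.per_block else self.volume_lock
        with lock:
            pass  # metadata update would happen here

# Two "hosts" updating different blocks proceed in parallel with
# per_block_locking=True; with False, one always waits on the other.
ds = Datastore(num_blocks=1024, per_block_locking=True)
t1 = threading.Thread(target=ds.update_block, args=(10,))
t2 = threading.Thread(target=ds.update_block, args=(20,))
t1.start(); t2.start(); t1.join(); t2.join()
```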

Block Zeroing – This feature saves vSphere from having to send redundant write commands to the array. The host can simply tell the storage array which blocks should be zeros and move on. The storage device handles the work without needing repetitive write commands from the host.
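
The traffic savings here can be sketched the same way. The command and block sizes below are assumptions for illustration; the underlying SCSI primitive is WRITE SAME, which describes a repeated pattern in one small command instead of shipping every zeroed block across the wire.

```python
# Illustrative arithmetic only: traffic to zero out N blocks with and
# without the Block Zeroing (WRITE SAME) offload. Sizes are assumptions.
CMD_BYTES = 512            # assumed size of a single WRITE SAME command
BLOCK_BYTES = 1024 * 1024  # assumed 1 MB zeroing granularity

def zeroing_traffic_bytes(blocks: int, write_same: bool) -> int:
    if write_same:
        return CMD_BYTES           # one command describes the whole range
    return blocks * BLOCK_BYTES    # host ships a zero-filled buffer per block

# Eager-zeroing a 40 GB disk:
print(zeroing_traffic_bytes(40 * 1024, write_same=False))  # ~40 GB of writes
print(zeroing_traffic_bytes(40 * 1024, write_same=True))   # one 512-byte command
```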

Now that you have an understanding of what VAAI is and how it frees up resources, I will talk about how it changes the way we should think about different design considerations.

The first thing that comes to mind is that I can now consider using larger datastores without worrying about hurting performance due to locking issues. With VAAI the storage device handles the locking, allowing far more VMs per volume than the 5-10 previously held as a guideline in past versions. It's now possible to have 50+ VMs on a single volume if you have a valid reason to.

The next thing that comes to mind is that I can achieve higher consolidation ratios on vSphere hosts if needed. The savings in CPU, network, and storage I/O overhead can be used to host more virtual machines on each host. In particular, if you are using blade chassis you can expect significant network I/O savings, since you can have up to 16 blades in a chassis depending on your vendor. That can add up to a huge decrease in traffic flowing through those shared ports.

Something I wondered about, and saw little discussion of, was what kind of extra load VAAI functions place on the array. I reached out and asked this question of Chad Sakac of EMC and Vaughn Stewart of NetApp. Both replied via Twitter and said that currently VAAI adds little to no extra burden to the arrays, and with coming firmware updates it's expected to be even less.

Lastly, to sum up what you need to take advantage of VAAI: you will need vSphere 4.1, and you need to be licensed for Enterprise or Enterprise Plus. Next, you must have a storage array that supports VAAI; this is probably the largest hurdle for most. If your array was purchased within the last two years, there is a good chance that a firmware upgrade will add VAAI support. If not, you will need to purchase a new array, which is an expensive investment. So it's conceivable that many smaller shops will never reap the benefits of VAAI because of these requirements. A quick way to check your current arrays is shown below.
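
As a starting point for that last hurdle, here is a hedged pyVmomi sketch that asks a host which of its LUNs report VAAI (vStorage) support. The host name and credentials are placeholders, and the vStorageSupport field reflects my reading of the 4.1-era ScsiLun API, so confirm it against your own environment.

```python
# Hedged sketch: list each LUN's reported VAAI (vStorage) support status.
# Connection details below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="esx01.example.com", user="root", pwd="password",
                  sslContext=ctx)
host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName="esx01.example.com", vmSearch=False)

for lun in host.config.storageDevice.scsiLun:
    # Expected values: vStorageSupported, vStorageUnsupported, vStorageUnknown
    print(lun.displayName, "->", getattr(lun, "vStorageSupport", "n/a"))

Disconnect(si)
```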
