VMware vSphere: designing with VAAI in mind

Posted on March 2, 2011 in VMware, vSphere, vStorage API

With the release of vSphere 4.1 last summer, VMware gave customers several new features. Many of these were created to reduce the burden on the ESX host, either by working more efficiently or by offloading work to something outside the virtualization stack, with the overall goal of continuing to improve virtual machine performance. The one I am writing about today is VAAI, the vStorage API for Array Integration, and I want to look at how designing with VAAI in mind changes the way you build environments.

The goal of VAAI is to offload some of the storage-focused activities that VMware previously handled to your storage array, something VMware accomplished by working closely with the major storage vendors. The idea was first announced back at VMworld 2008 and finally came to market when vSphere 4.1 was released. Offloading these storage functions reduces the load on the ESX(i) hosts and also improves the performance of these activities by letting the storage array do the work it was built to do.

The current release of VAAI offloads three functions. VMware is expected to keep working with storage vendors to expand VAAI in future releases; the currently available features are explained below.

Full Copy – You're probably wondering how this feature is going to help you. I can think of two VMware functions where this VAAI feature provides upwards of a 10x speed improvement. The first is deploying a VM from a template. Say, for example, that you are going to deploy a 50 GB VM. When the VM is deployed, vSphere reads the entire 50 GB and then writes the 50 GB, for a total of 100 GB of I/O. With VAAI enabled and a storage array that supports it, this process creates very little I/O at the host: the vSphere host sends a command to the storage array that essentially says "make a copy of this VM and name it this." The copy is done locally on the storage array, resulting in very little I/O between host and array, and once it is finished the array notifies the host that the work is complete.
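
To make that concrete, here is a minimal sketch using pyVmomi, the vSphere Python SDK. The vCenter address, credentials, template name, and datastore name are all placeholders; the point is simply that the clone request itself does not change, and when Full Copy is available the array does the heavy lifting behind the same call.

```python
# Minimal pyVmomi sketch: deploy a VM from a template.
# With VAAI Full Copy, the same clone request is completed inside the
# array; nothing about the API call changes on the host side.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details.
si = SmartConnect(host='vcenter.example.com',
                  user='administrator', pwd='password')
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

template = find_by_name(vim.VirtualMachine, 'win2008-template')  # placeholder
datastore = find_by_name(vim.Datastore, 'datastore1')            # placeholder

clone_spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(datastore=datastore),
    powerOn=False, template=False)

# Issue the clone exactly as you always would.
task = template.CloneVM_Task(folder=template.parent,
                             name='new-vm-01', spec=clone_spec)

Disconnect(si)
```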

The second VMware feature to benefit from this is Storage vMotion. I feel this is where Full Copy really pays off, because you are most likely moving a larger chunk of data. For example's sake, let's say we are going to move a 100 GB virtual machine from one datastore to another. In the past this would have caused 200 GB of I/O on the host. With VAAI the burden on the host is almost nothing, as the work is done on the storage array.
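
The API side of a Storage vMotion is just as unremarkable: it is a relocate call, and the copy is absorbed by the array when both datastores sit on VAAI-capable storage. A short sketch, reusing the connection and the find_by_name helper from the example above, with placeholder VM and datastore names:

```python
# Storage vMotion sketch (reuses find_by_name from the previous example).
# The relocate call is identical with or without VAAI; when Full Copy is
# available the array absorbs the data movement.
vm = find_by_name(vim.VirtualMachine, 'file-server-01')   # placeholder
target_ds = find_by_name(vim.Datastore, 'datastore2')     # placeholder

task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds))
```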

Hardware assisted locking – To allow multiple hosts in your cluster to share the same storage volume, VMware would lock the whole volume whenever one of the hosts needed to write to it, preventing another host from writing to the same blocks at the same time. This was not a large issue if you were using smaller volumes with only a handful of virtual machines on them. With VAAI the locking has been offloaded to the storage array, and it is now possible to lock only the blocks that are being written to. This opens up the possibility of using larger volumes and increasing the number of VMs that can be run on a single volume.
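
On a vSphere 4.1 host this offload is surfaced as the advanced setting VMFS3.HardwareAcceleratedLocking. Here is a small, hedged sketch that checks whether it is enabled on each host, assuming the pyVmomi session from the earlier example:

```python
# Check whether hardware assisted locking (ATS) is enabled on each host.
# VMFS3.HardwareAcceleratedLocking = 1 means the offload is on.
# Assumes the pyVmomi session (content) from the earlier sketch.
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    opt = host.configManager.advancedOption.QueryOptions(
        'VMFS3.HardwareAcceleratedLocking')[0]
    print(host.name, 'ATS enabled:', opt.value == 1)
hosts.Destroy()
```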

Block Zeroing – This feature saves vSphere from having to send redundant write commands to the array. The host can simply tell the storage array which blocks should be zeroed and move on; the storage device handles the work without needing to receive repetitive write commands from the host.
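
The most visible beneficiary is an eager-zeroed thick disk, where the zero-fill that used to stream from the host is pushed down to the array. A rough sketch using the VirtualDiskManager API; the datacenter name and datastore path are placeholders:

```python
# Create a 20 GB eager-zeroed thick disk; with VAAI Block Zeroing the
# zero-fill is handled by the array instead of being written by the host.
# Assumes the pyVmomi session and find_by_name helper from above.
dc = find_by_name(vim.Datacenter, 'DC01')                  # placeholder
spec = vim.VirtualDiskManager.FileBackedVirtualDiskSpec(
    diskType='eagerZeroedThick',
    adapterType='lsiLogic',
    capacityKb=20 * 1024 * 1024)
task = content.virtualDiskManager.CreateVirtualDisk_Task(
    name='[datastore1] scratch/ezt-disk.vmdk',             # placeholder path
    datacenter=dc,
    spec=spec)
```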

Now that you have an understanding of what VAAI is and how it helps free up resources, let's talk about how it changes the way we should think about different design considerations.

The first thing that comes to mind is that I can now consider using larger datastores without worrying about hurting performance through locking issues. With VAAI the storage device handles the locking, which allows far more VMs per volume than the 5-10 previously held as a guideline in past versions. It is now possible to run 50+ VMs on a single volume if you have a valid reason to.

The next thing that comes to mind is that I can achieve higher consolidation ratios on vSphere hosts if needed. The savings in CPU, network, and storage I/O overhead can be used to host more virtual machines on each host. In particular, if you are using blade chassis you can expect to see significant network I/O savings, since a chassis can hold up to 16 blades depending on your vendor, and that can add up to a large decrease in traffic flowing through those shared ports.

Something I was wondering about, and saw little discussion of, was what kind of extra load the VAAI functions place on the array. I reached out with this question to Chad Sakac of EMC and Vaughn Stewart of NetApp. Both replied via Twitter and stated that VAAI currently adds little to no extra burden to the arrays, and that in coming firmware updates it is expected to be even less.

Lastly, to sum up what you need to take advantage of VAAI: you will need vSphere 4.1, licensed at Enterprise or Enterprise Plus. You must also have a storage array that supports VAAI, which is probably the largest hurdle for most. If your array was purchased within the last two years there is a good chance that a firmware upgrade will add VAAI support; if not, you will need to purchase a new one, which is an expensive investment. So it is conceivable that many smaller shops will never get to reap the benefits of VAAI because of these requirements.
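
If you want to see what your current array reports, the 4.1 API exposes a per-disk hardware acceleration field. Here is a hedged sketch that lists it for every SCSI disk on every host, again assuming the pyVmomi session from the earlier examples; treat the property name, vStorageSupport, as illustrative rather than gospel.

```python
# List the hardware acceleration (VAAI) status reported for each SCSI disk.
# Expected values are 'vStorageSupported', 'vStorageUnsupported', or
# 'vStorageUnknown'. Assumes the pyVmomi session from the earlier sketches.
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    for lun in host.config.storageDevice.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk):
            print(host.name, lun.canonicalName,
                  getattr(lun, 'vStorageSupport', 'unknown'))
hosts.Destroy()
```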

