Some of the characteristics that make a cloud a great solution for customers also introduce new challenges on the management side. Much as the hypervisor abstracted the hardware from virtual servers back in the early virtualization days, the cloud now abstracts the hypervisor from its tenants. Today this is by design: creating an environment that is easy to use also removes many of the management and provisioning choices from the users.
But by doing this you also limit some of the feature and performance decisions that users may need or demand from a cloud design. I don't think most cloud platforms today have the features to help users make performance-related decisions easily or to create complex custom virtual machines. Let's talk through a few of these items below and why I think Tintri is best positioned to solve many of them today.
Complex Cloud VMs
So what do I mean by a complex cloud VM? The best example I can give is a database server. The typical best-practice configuration separates the various parts of the database server onto individual virtual disks, preferably on discrete LUNs that match the performance characteristics of each part of the database. This means the OS drive, database drive, temp DB drive and logs drive would each be on its own virtual disk, matched up with a datastore backed by the recommended RAID level. Along with selecting the right RAID level, you would also consider the number and type of physical disks backing each LUN.
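As a minimal sketch, the best-practice layout above could be expressed as a simple placement map. The datastore names, RAID assignments and I/O notes here are hypothetical examples, not Tintri or vCloud constructs:

```python
# Hypothetical best-practice disk layout for a database VM: each virtual
# disk maps to a datastore whose RAID level suits its I/O pattern.
DB_VM_LAYOUT = {
    "os":     {"datastore": "ds-raid5-std",   "raid": "RAID-5",  "io": "low, mixed"},
    "data":   {"datastore": "ds-raid10-fast", "raid": "RAID-10", "io": "random read/write"},
    "tempdb": {"datastore": "ds-raid10-fast", "raid": "RAID-10", "io": "bursty random write"},
    "logs":   {"datastore": "ds-raid10-log",  "raid": "RAID-10", "io": "sequential write"},
}

def datastores_needed(layout):
    """Count the discrete datastores this one VM requires."""
    return len({disk["datastore"] for disk in layout.values()})

print(datastores_needed(DB_VM_LAYOUT))  # 3 discrete datastores for a single VM
```

Even this simplified example needs three separate datastores for one VM, which is exactly the kind of placement decision a cloud catalog hides from the consumer.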
Today vCloud, and I would say most other cloud stacks, do not fully account for the performance requirements of these complex workloads when they are provisioned from a catalog, if at all. Some people will say this is possible in vCloud, but not without manual intervention from an admin in vSphere. My point is that this process should be hidden from the cloud consumer and happen automatically when they self-provision a VM.
The approach many typical arrays take is to provision tiered storage and present it as Org Virtual Data Centers (vDCs) in vCloud, then force the user to select a tier on which to place the entire VM at provisioning time. This is certainly a viable option. You could use different types of storage, like an all-flash array or something that auto-tiers storage blocks on a scheduled basis, to create pools of storage. While these are valid options, I think they add extra complexity and are not very granular: you usually have to make performance-based decisions for an entire LUN or pool of storage.
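To make the granularity point concrete, here is a minimal sketch (tier names and performance classes are made up for illustration) of what whole-VM placement means: every disk inherits the single tier chosen for the VM, even when only one disk actually needs it:

```python
# Hypothetical storage tiers, ranked by performance class (higher = faster, pricier).
TIERS = {"bronze": 1, "silver": 2, "gold": 3}

def place_whole_vm(disk_classes, tiers=TIERS):
    """With whole-VM granularity, the entire VM lands on the lowest tier
    that still satisfies its single most demanding disk."""
    needed = max(disk_classes.values())
    for name, cls in sorted(tiers.items(), key=lambda t: t[1]):
        if cls >= needed:
            return name
    raise ValueError("no tier is fast enough for this VM")

# Only tempdb needs gold-class performance, but every disk gets placed there with it.
disks = {"os": 1, "data": 2, "tempdb": 3, "logs": 2}
print(place_whole_vm(disks))  # gold
```

The OS and logs disks end up paying for gold-tier storage they don't need, which is the over-provisioning cost of making performance decisions per LUN or pool rather than per virtual disk.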
Now you are probably wondering how Tintri is going to make this any simpler. I will not give you the entire Tintri value pitch, as that's not my job; you can review their features here. In a vCloud design specifically, Tintri solves this complex VM issue by offering a storage appliance that is VM-aware. First, Tintri uses its finely tuned methods to serve nearly all of your IO (both reads and writes) from flash storage. This is very helpful because you get clear visibility into which virtual machines are using higher amounts of storage IO, even down to the individual virtual disk. And the icing on the cake is that since Tintri is completely VM-aware, if you need to guarantee the highest performance for a specific VM or virtual disk, you have the ability as an admin to pin that VM or disk to flash, thus guaranteeing the best performance for that virtual machine. I received a reply from Tintri explaining that while manually pinning VMs to flash is possible, under normal use it is not needed: their sub-VM QoS, along with their flash hit percentage, provides all workloads the throughput, IOPS and low latency they need. Today this type of granularity is not possible from other storage vendors.
I tried to find a graphic to visualize the awesome NFS performance Tintri can provide, but when you search for NFS the results are heavily polluted with Need for Speed images. Then again, who doesn't love a good burnout picture?
Why NFS is just a solid choice for vCloud
OK, so it's not like Tintri is the only NFS player in the market. But by choosing NFS as their storage protocol, I think they match up well with creating clean vCloud designs. As with the discussion above about creating tiers of storage for performance, if you use block-based storage you will most likely also be creating multiple datastores. The need for multiple datastores could be driven by factors such as supporting multiple tenants in your cloud, or by limits on the size of a datastore or the number of VMs it can hold.
NFS allows you to be more flexible by using fewer, larger exports, or in Tintri's case a single NFS datastore per appliance. This reduces waste by not cutting your usable space into many smaller parts to satisfy various design decisions. There are sure to be valid reasons you might still need those divisions, and Tintri cannot solve everything. But many of the clouds being built by corporations today are mostly private in nature, and those tend to benefit from what Tintri offers.
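The waste argument comes down to simple arithmetic. Here is an illustrative sketch (all capacities and counts are made up): because a VM's disks can't span datastores, every small block LUN strands a remainder of free space, while one large NFS export pools it all:

```python
import math

# Illustrative numbers only: one array carved two different ways.
ARRAY_TB = 40.0
LUN_TB = 2.0        # block design: 20 x 2 TB LUN-backed datastores
VM_TB = 0.9         # average provisioned size per VM

vms_per_lun = math.floor(LUN_TB / VM_TB)          # 2 VMs fit in each 2 TB LUN
vms_block = int(ARRAY_TB / LUN_TB) * vms_per_lun  # 20 LUNs x 2 VMs = 40 VMs
vms_nfs = math.floor(ARRAY_TB / VM_TB)            # one 40 TB export: 44 VMs

print(vms_block, vms_nfs)  # 40 vs 44 -- the stranded space adds up
```

Each 2 TB LUN strands 0.2 TB it can't hand to a neighbor, and across 20 LUNs that leftover space would have held four more VMs. The effect only grows as VM sizes vary more.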
And don't forget that NFS datastores are thin provisioned by default, saving space on your storage without having to use Fast Provisioning in vCloud, which does not fit well with all designs.
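A quick back-of-the-envelope example shows why this matters; the VM counts and sizes below are invented for illustration:

```python
# Illustrative thin-provisioning math: thin disks consume only written
# blocks, so provisioned capacity can safely exceed stored capacity.
vms = 50
provisioned_gb_each = 100   # size each virtual disk is created at
written_gb_each = 30        # blocks the guest OS has actually written

thick_used = vms * provisioned_gb_each   # thick: full size reserved up front
thin_used = vms * written_gb_each        # thin: pay only for written data

print(thick_used, thin_used)  # 5000 GB reserved vs 1500 GB actually consumed
```

With thick provisioning the array gives up 5 TB on day one; thin provisioning defers that until the guests actually write the data, which is exactly the space savings you would otherwise chase with Fast Provisioning.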
To wrap all this up, you should have these types of conversations with your VMware partner, vendor or internal teams to evaluate the best solution based on the requirements of your cloud design. But no matter what your needs are, I think you should take a serious look at what Tintri might be able to solve for you in the areas of performance and ease of management.
Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix, and the owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and cloud project designs. He was awarded VMware vExpert status six years running, from 2011 to 2016. VCP3, VCP5, VCP5-Iaas, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design