Infinio Accelerator install and walkthrough
I was recently provided the opportunity to work with the Infinio Accelerator product. In case you have not heard of Infinio or had the chance to look into it, here is a quick summary: the Accelerator is a software-only, server-side caching product that accelerates reads from your central storage.
The current version of Infinio supports NFS storage only. It uses a management VM plus an accelerator VM on each host; each accelerator VM uses one vCPU and 8GB of memory. Since the solution uses excess host memory, there is no requirement for local SSDs.
The product is pretty darn easy to install and get working, as you will see from my walkthrough below. Beyond the ease of install, the licensing cost of the solution is affordable too. Infinio licenses the product per socket, with a retail price of $### per socket, which puts the cost at about $1000 for the average server used in most configurations. And since the large majority of shops do not run their hosts very hot, having 8GB of memory available per host won't be an issue either.
Infinio Accelerator Install
The first thing to do is download the install bits from Infinio, which weigh in at just over 1GB. Once you have the software you will find an .OVA file and a setup.exe. Start the install by running the executable; if you just deploy the OVA on its own it will work, but you will be missing some important configuration options.
Upon starting the setup wizard you will be presented with the first screen shown below. This explains the steps that you will go through for the setup and asks you to accept the License Agreement.
Up next in step 2 is providing the details for connecting the Infinio management VM to your vCenter. This is very straightforward: it lets the management appliance discover your hosts, networking, and datastores, and perform the configuration updates needed later in the setup.
Now it presents the option to choose the first datastore that you want to accelerate. The installer scans your vCenter and presents a list of the available NFS datastores. You pick one to start and can accelerate others once the install is complete.
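The discovery step amounts to a simple filter over the vCenter inventory: only NFS datastores are offered for acceleration. Here is a minimal Python sketch of that idea, using made-up inventory records in place of a live vCenter connection (the names and fields are hypothetical, not Infinio's actual API):

```python
# Hypothetical inventory records standing in for what the installer
# discovers from vCenter. Only NFS datastores can be accelerated.
datastores = [
    {"name": "nfs-ds01", "type": "NFS", "capacity_gb": 500},
    {"name": "local-ssd", "type": "VMFS", "capacity_gb": 240},
    {"name": "nfs-ds02", "type": "NFS", "capacity_gb": 1000},
]

def accelerable(inventory):
    """Return the names of datastores eligible for acceleration (NFS only)."""
    return [ds["name"] for ds in inventory if ds["type"] == "NFS"]

print(accelerable(datastores))  # → ['nfs-ds01', 'nfs-ds02']
```

The VMFS datastore is simply skipped, which matches what you see in the wizard: local and block storage never appear in the selection list.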
Next up in step 4 is validating that your hosts have resources available for the accelerator VMs. The required resources are pretty low: just 8GB of memory, 1 vCPU, and 15GB of local disk space per host.
The next step, shown below, lets you review the details of the resource validation; you will see which hosts pass or fail the checks.
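The pass/fail check above boils down to comparing each host's free resources against the per-host requirements. A quick Python sketch of that logic, assuming the 8GB / 1 vCPU / 15GB figures from the step above (the function and its inputs are my own illustration, not Infinio's code):

```python
# Per-host requirements for an accelerator VM, per the setup wizard.
REQUIRED_MEM_GB = 8
REQUIRED_VCPUS = 1
REQUIRED_DISK_GB = 15

def validate_host(free_mem_gb, free_vcpus, free_disk_gb):
    """Return True if a host has headroom to run one accelerator VM."""
    return (free_mem_gb >= REQUIRED_MEM_GB
            and free_vcpus >= REQUIRED_VCPUS
            and free_disk_gb >= REQUIRED_DISK_GB)

# A host with plenty of headroom passes; one short on memory fails.
print(validate_host(64, 8, 200))  # True
print(validate_host(6, 8, 200))   # False
```

This is why lightly loaded hosts sail through validation: unless a host is already running hot on memory, the footprint is easy to absorb.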
Up next in the sixth step you will decide where to deploy the management VM. You simply need to select the host, datastore, and network port-group.
Up next you will need to provide the username and password for accessing the management VM. These credentials will be used when accessing the web management portal.
The last step here is to provide network details for the management VM. You need to provide the hostname for the VM (make sure to register it with your DNS server) and the IP address, which can be static or DHCP.
Last up is the setup confirmation screen. This shows the selections and details that you provided during the setup. If everything is to your liking you can start the install.
Once the install starts it will begin by deploying the OVA for the management VM, configure it with the IP information, and then deploy the acceleration VMs to the hosts you selected.
Once the setup is completed you will see a confirmation screen like the one shown below. It will show you whether the steps were successful and how you can reach the management console.
At this point the initial datastore that we selected to be accelerated is ready to go. The read traffic is now being cached and you should begin to see improvement in read performance almost immediately.
How did it perform?
I only tested Infinio in my home lab, and only used a single host for the test. I accelerated an NFS datastore on an old Iomega IX2 NAS device. The IX2 is a two-drive unit with SATA drives, so its performance is pretty crappy. The results were pretty impressive: Infinio helped read performance a great deal, which allowed write performance to improve marginally as well.
I ran a read test with a 4K block size and was able to get a max of 11,000 – 12,000 IOPS out of the wimpy IX2. That is pretty impressive. One limiting factor in the test is that I only used a single host and one VM to generate the workload. If I had set up a larger cluster and accelerated all the hosts, the cache available would be 8GB times the number of hosts, allowing more workloads to be served out of cache.
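To put those numbers in context, here is a quick back-of-the-envelope calculation: the throughput implied by the 4K read test, and how the aggregate cache scales with cluster size at 8GB per host (hypothetical host counts, just for illustration):

```python
# Throughput implied by the 4K read test: IOPS x block size.
block_kb = 4
iops = 12_000
throughput_mb_s = iops * block_kb / 1024
print(f"{throughput_mb_s:.0f} MB/s")  # ~47 MB/s of 4K reads from cache

# Aggregate cache grows linearly with cluster size: 8GB per host.
cache_per_host_gb = 8
for hosts in (1, 4, 8):
    print(hosts, "hosts ->", hosts * cache_per_host_gb, "GB of cache")
```

Roughly 47 MB/s of small-block random reads is far beyond what a two-drive SATA NAS could serve on its own, which is the whole point: those reads are coming out of host memory, not off the spindles.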
About Brian Suhr
Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix, and the owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and cloud project designs. Awarded VMware vExpert status for 6 years, 2011 – 2016. Certifications: VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design.