My VSAN home lab configuration
The excitement around VMware VSAN continues to grow, and I expect to see its release, or the announcement of a release date, on March 6 at a webcast VMware has scheduled. I’ve been working on expanding the home lab, and one candidate for the new expansion was to set up a small VSAN cluster and do some testing.
My goal for this setup is to get comfortable with VSAN, test how stable and resilient it is, and see if it has a long-term fit in the home lab. If I’m pleased with the results, this could become my long-term management cluster for the lab. If not, it will be used as a target cluster for the EUC and cloud solutions that I work with. Either way, I think VSAN will likely have a home in the lab going forward.
Home Lab VSAN Cluster
To build the cluster I am using a group of three HP DL365 G5 servers that I recently acquired. Each server has a pair of quad-core AMD CPUs and 32GB of memory. The servers have only six drive bays, so I am somewhat limited in the size of the VSAN I can deploy. But I’m not too worried about that, since this is more about testing the technology and seeing how it might help in my lab.
For my build, each server will have a single SSD and four spinning disks. This will meet the requirements and also give me a bit of capacity to play with.
My VSAN Configuration
I wanted to share my configuration so that anyone considering building VSAN at home has some real-world examples to review. The DL365 servers that I’m using do not have a supported disk controller in them. My build will use the HP P400i RAID card already in the servers; it does not have a pass-through or JBOD mode, so it’s not ideal. It will work for my testing, though I’m not sure how it will affect performance.
Each server will boot ESXi from a USB stick, and I will use the drives for the VSAN disk groups. Each server will have a disk group with a 60GB SATA SSD and four 72GB 10K SAS drives. The SSD is a SATA3 disk but the controller is not SATA3, so I will be losing a bit of performance there as well.
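For anyone who prefers the command line over the Web Client wizard, a disk group like this can also be created per host with esxcli. This is a minimal sketch; the naa.* device identifiers below are placeholders for whatever your P400i logical volumes show up as:

```shell
# List local devices to find the identifiers for the SSD and SAS drives
esxcli storage core device list

# Build the disk group: one SSD (-s) plus the four spinning disks (-d)
# (naa.* IDs below are placeholders -- substitute your own device names)
esxcli vsan storage add -s naa.600508b1001c000000000000000000aa \
  -d naa.600508b1001c000000000000000000ab \
  -d naa.600508b1001c000000000000000000ac \
  -d naa.600508b1001c000000000000000000ad \
  -d naa.600508b1001c000000000000000000ae
```

If you let vSphere claim the disks automatically you can skip this, but doing it manually makes it clear exactly which devices end up in the group.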
To start, the networking for each server is just a pair of 1GbE connections with a single vSwitch on each host. These connections will handle all management, vMotion, VSAN, and VM traffic. In later tests I will add more uplinks and see whether a pair of dedicated links for VSAN improves performance.
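For reference, tagging a VMkernel interface for VSAN traffic can be done from the host shell as well as the Web Client. A rough sketch, assuming a port group named "VSAN" already exists on the vSwitch and using a placeholder IP from my lab range:

```shell
# Create a VMkernel port on the existing vSwitch ("VSAN" port group is an
# assumption -- use whatever port group name you created)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name="VSAN"
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.1.11 -N 255.255.255.0 -t static

# Tag the new interface for VSAN traffic
esxcli vsan network ipv4 add -i vmk1
```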
Since there is no pass-through mode, I have to configure each drive as a RAID 0 logical volume with a single disk. I took a bad picture of the configuration in the image below; forgive me, as I was in my poorly lit basement. Another item is that the P400i controller in this setup does not report to vSphere that my drive is an SSD, so I have to manually flag the SSD on each host so that vSphere knows which device is the flash drive.
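The manual SSD flagging can be done with a SATP claim rule from the ESXi shell. A sketch of the procedure, with a placeholder device ID standing in for the logical volume backed by the SSD:

```shell
# Find the device ID of the SSD-backed logical volume (placeholder below)
esxcli storage core device list

# Add a claim rule marking the device as SSD, then reclaim it
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL \
  --device naa.600508b1001c000000000000000000aa --option "enable_ssd"
esxcli storage core claiming reclaim -d naa.600508b1001c000000000000000000aa

# Verify -- the device detail should now show "Is SSD: true"
esxcli storage core device list -d naa.600508b1001c000000000000000000aa
```

This has to be repeated on each host, since the rule is local to the host it is added on.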
The following image shows the configuration for each logical drive in the P400i setup screen. I used the HP Array Configuration Utility (ACU) for this, as the boot-time setup has fewer options. I picked a small block size for my test; I’m not yet sure that was the right choice, and only further testing will tell. The other major item was that I turned off the Array Accelerator feature for each of the RAID 0 volumes. Since VSAN wants a JBOD configuration, I don’t want the RAID card interfering at all.
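The same single-disk RAID 0 setup can be scripted with HP's hpacucli tool instead of clicking through the ACU. A rough sketch, assuming the controller is in slot 0; the drive addresses (1I:1:1, etc.) are placeholders for your actual bays:

```shell
# Create a single-disk RAID 0 logical drive per physical disk
# (drive addresses are placeholders for the bays in your server)
hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0

# Turn off the Array Accelerator (controller cache) on each logical drive
hpacucli ctrl slot=0 ld 1 modify arrayaccelerator=disable
hpacucli ctrl slot=0 ld 2 modify arrayaccelerator=disable

# Review the resulting configuration
hpacucli ctrl slot=0 ld all show detail
```

Scripting it this way makes it easy to keep all three hosts configured identically.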
Initial Testing Results
Those of you I have not put to sleep by now are probably wondering how this performed. I have done just a small sampling of tests so far and was pretty happy with the results, given my basic configuration and less-than-ideal hardware.
The first test was the Max IO test from the VMware I/O Analyzer. I just wanted to see the maximum read performance I could drive from this setup. A single VM running on one of the hosts was able to drive over 14,000 IOPS. I would say that is pretty good, but using the same model SSD in one of my whitebox servers with PernixData, I was able to get 50,000 IOPS on the same test. So I’m wondering how much the disk controller is affecting performance here, or whether it is just VSAN.
The second test was the SQL 16K workload, which I used to drive both reads and writes. This would really exercise the config, since it was not just a small set of read data. The image below shows the results I monitored in ESXTOP, along with the network traffic the vmnic was passing during the test. These results were OK, with total IO passing 3,000 at some points and about a third of that being writes. For a home lab this is not too bad, and I would guess that with a newer controller and more disks I could do much better.
Overall I’m pretty happy with the initial tests and plan to spend more time experimenting with VSAN. I might look for a different disk controller to use in place of the HP RAID controller that I’m stuck with for now.
Have something to add to this post? Share it in the comments.