How does FreeNAS perform with VMware vSphere?

Posted by Brian Suhr on April 18, 2012 in Home Lab, Labs, VMware | 2 comments

I was talking to a co-worker who was kicking around the idea of using FreeBSD and ZFS for shared storage in his home lab. I thought it sounded decent, but I never imagined it could squeeze a ton of IOPS out of some old hardware. To make my life easier, since I’m no Linux geek, I elected to run FreeNAS 8, which is the same setup wrapped up in a nice package with a web GUI. Perfect for a former Windows geek.

Now, I never really had very high hopes of getting much performance out of the test server I would be using. And after eating some Chinese takeout, you can see the fortune I got was telling me not to get my hopes up.

So I dug up one of my old servers that I had retired after vSphere 5 was released. It is a 64-bit machine, but its CPUs are not VT compatible, so it is not much use as a vSphere host any longer. It was, however, the best candidate for this storage experiment. The server is an IBM x346 with two dual-core Xeon CPUs and 6GB of RAM, and I am using just one of the onboard 1GbE connections for Ethernet. For disks I have four 72GB U320 10K drives: one is used for the OS install and the other three are used for a ZFS volume. Yes, that’s right, I am going to use just 3 x 10K SCSI drives for this NAS box. I know what you are probably thinking. A picture of this awesome 6+ year old machine is shown below.

Ok, enough rambling about my setup; here is how the test was run. I loaded the 64-bit version of FreeNAS 8 on the server, using one of the drives for the install. The three remaining disks went into a ZFS volume configured as a RAID-Z group, which netted me about 130GB of usable space. I then created a 20GB LUN and presented it via iSCSI to one of my ESXi 5.0 hosts. On that host I loaded the VMware Fling IO Analyzer 1.1 to generate the workload, presented the 20GB volume to the IO Analyzer appliance as a physical RDM, and let it do all the work.
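Under the covers, the FreeNAS GUI is doing roughly the following with ZFS. This is only a sketch of the equivalent FreeBSD commands; the pool name (tank), device names (da1-da3) and zvol name are examples rather than my actual config, and the iSCSI export itself was done through the FreeNAS web GUI rather than by hand:

    # Build a RAID-Z pool from the three data disks (device names are examples)
    zpool create tank raidz da1 da2 da3

    # Carve out a 20GB zvol to export as the iSCSI LUN
    zfs create -V 20G tank/esxlun0

    # Sanity check the pool layout and usable space
    zpool status tank
    zfs list -r tank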

The results were pretty impressive: out of this old junker of a box I was able to pull 6,255 IOPS. I pretty much fell on the floor when I looked at the report. It appears that ZFS does an awesome job of using the 6GB of memory in the server to do some impressive caching of IO, because those three disks should only be able to do about 450 IOPS combined. While the test was running I kept an eye on the top statistics in FreeBSD, and the CPU hovered around 40% used while under load. Not bad for those old processors. A screen shot of the top statistics is shown below.
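If you want to watch that caching happen for yourself while a test runs, a couple of stock FreeBSD/ZFS commands will show it. This is just a sketch, reusing the example pool name from above:

    # Live per-device I/O for the pool, refreshed every 5 seconds
    zpool iostat -v tank 5

    # Current ARC (ZFS read cache) size plus hit/miss counters
    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

    # Overall CPU and memory usage, the same view I was watching during the run
    top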

So the last thing you are probably thinking is: dang, if this old beater can turn out 6,255 IOPS, what could a newer box do? Well, thanks to the help of a co-worker, we have an idea. Mike Mills ran a similar test on a newer and better equipped server. His test server is an HP DL385 G2 with dual AMD Opteron 2218 CPUs and 32GB of memory. He set up four network adapters: one for management and three for iSCSI traffic grouped into an iSCSI portal on the target. The drives he used were 8 x 72GB SAS drives, which produced a 451GB ZFS volume. He built this with FreeBSD 8.2 and set up ZFS on his own because he is a Linux geek.
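On the ESXi side, taking advantage of multiple iSCSI portal addresses like that is normally done with software iSCSI port binding. The esxcli sketch below is the general ESXi 5.x approach rather than a record of Mike’s exact config, and the adapter (vmhba33) and VMkernel port names (vmk1-vmk3) are examples:

    # Bind three VMkernel ports to the software iSCSI adapter (names are examples)
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3

    # Verify the bindings, then rescan for the target's LUNs
    esxcli iscsi networkportal list --adapter vmhba33
    esxcli storage core adapter rescan --adapter vmhba33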

From the image below of the report pulled from IO Analyzer, you can see that his setup pulled a little over 15,000 IOPS. That is very impressive for a used server with a few drives and less than $1,000 invested. Oh, and there are no SSDs in either of these configs. So with a few more drives and some extra memory he was able to more than double my results.

What does this all mean? I don’t know, but if you are looking to build some NAS storage for your VMware home lab, I think you should take a serious look at the ZFS option. Both of these tests were done with out-of-the-box settings and no tweaking. If you had the budget to include some SSDs, I bet the performance could be amazing. I know that I’m going to be looking for some 146GB drives to put in my server and use it for the lab on a permanent basis.
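If you do find the budget for SSDs, ZFS has two obvious places to put them: a second-level read cache (L2ARC) and a separate intent log (SLOG) for synchronous writes. A minimal sketch, again using the example pool name and made-up SSD device names:

    # Add an SSD as an L2ARC read cache device (device names are examples)
    zpool add tank cache ada4

    # Add another SSD as a separate intent log for synchronous writes
    zpool add tank log ada5

    # Confirm the new cache and log devices show up in the pool layout
    zpool status tank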

 

About Brian Suhr

Brian is a VCDX5-DCV, a Solutions Architect for a VMware partner, and the owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and cloud project designs. Awarded VMware vExpert status for 2013, 2012 & 2011. Certifications: VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design.

2 Comments

  1. Brian, can you give more detail or screenshots of your ZFS disk configuration? How are the disks connected? Are they set up on a RAID card? Etc…

    Very interesting work.

    Regards

    Steve

    • I have torn down this test machine so I cannot take any screenshots. Hope to build a better version sometime soon.

      I can tell you that the disks were just connected to the onboard Ultra320 SCSI controller in the server. From what I understand, ZFS does not like it when there is a RAID card controlling the disks, unless the card can operate in JBOD mode.
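      A quick way to check how FreeBSD sees the disks, i.e. as individual devices rather than hidden behind a RAID controller’s own volume, is below. This is a general FreeBSD sketch rather than output from my test box, and the pool name is the same example used in the post:

          # List every disk the SCSI/CAM layer sees; each drive should show up
          # individually (da0, da1, ...) instead of as one big RAID volume
          camcontrol devlist

          # Show which devices are actually backing the ZFS pool
          zpool status tank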
