Home and lab network upgrade with Ubiquiti gear

Recently one of my lab switches began to fail. Since it was the one that did most of the routing in my setup, it was time to reevaluate my home networking design. I could have just picked up another layer 3 switch, dropped it in, and continued doing the same thing as before. But I'm always looking to do things better, and my current setup was using gear from multiple vendors.

I was using Meraki for my firewall and Access Points (APs), HP for my 1GbE networking and routing, and Quanta for 10GbE networking. This setup worked fine, but obviously there were many different touch points. I would have loved to replace the HP switch with one from Meraki, but they are pretty expensive, so that was out of the question. Also, I don't like paying the yearly licensing costs to Meraki, but I had been doing so for a few years because I really liked the features.

So this led me to take another look at Ubiquiti for networking gear; I have seen lots of others express their happiness with the products after using them. So rather than paying for more Meraki licenses in 6 months, I chose to invest that future money, plus a little more, to replace most of my network with Ubiquiti gear. I ended up replacing everything but the Quanta switch that handles 10GbE networking.

The new network now uses the Security Gateway (SG) as my edge firewall and router for all traffic. The SG connects to the new 1GbE network switch, which is PoE capable so it also powers the new AP that was deployed. I use 1GbE for older lab servers and some IPMI connections, and then have a trunked connection to my Quanta switch that newer lab hosts connect to. With this setup I can now control all networking except the Quanta from the single Ubiquiti controller that I deployed on a Windows VM in the lab.
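To make the traffic flow a little easier to follow, here is the layout roughly sketched out as data; the device names and port groupings below are illustrative placeholders of my own, not values pulled from the actual UniFi config.

```
# Illustrative sketch of the new layout; names and port groupings are
# placeholders, not copied from the actual UniFi configuration.
topology = {
    "sg": {"role": "edge firewall / router", "wan": "ISP", "lan": "unifi-switch"},
    "unifi-switch": {
        "role": "1GbE PoE access switch",
        "uplink": "sg",
        "ports": {
            "ap": "PoE-powered UniFi AP",
            "lab-legacy": "older 1GbE lab servers and IPMI",
            "quanta-trunk": "trunk uplink to the 10GbE Quanta switch",
        },
    },
    "quanta": {"role": "10GbE switch for newer lab hosts", "uplink": "unifi-switch (trunk)"},
    "controller": {"role": "UniFi controller", "runs_on": "Windows VM in the lab"},
}

for device, info in topology.items():
    print(f"{device}: {info['role']}")
```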


While I'm losing a few features that Meraki offered and that I used, they are things I can deal with. It's only been a short period of time, but so far I'm pretty happy with the Ubiquiti products and hope they live up to their high praise.

Lessons Learned

I had never used Ubiquiti gear before, so there were a few things I learned while setting up and fighting through some issues in the beginning. The first would be to just go ahead and install the UniFi controller software on a VM or an old laptop that will always be on and connected. Installing the controller on your laptop is not a great idea if you are not always home and online. The devices hold their configuration, but it cannot be changed if the controller is not present. You also cannot access the reporting if the controller is not around.
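Since the devices can't be reconfigured while the controller is down, it helps to have a quick way to confirm the controller VM is actually reachable before troubleshooting anything else. The snippet below is a minimal sketch of that check; the hostname is a made-up placeholder, and it assumes the controller's default web port of 8443.

```
import socket

# Quick reachability check for the UniFi controller's web interface.
# "unifi.lab.local" is a hypothetical hostname; 8443 is the controller's
# default HTTPS port. Adjust both for your environment.
CONTROLLER_HOST = "unifi.lab.local"
CONTROLLER_PORT = 8443

def controller_is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    state = "reachable" if controller_is_up(CONTROLLER_HOST, CONTROLLER_PORT) else "unreachable"
    print(f"UniFi controller at {CONTROLLER_HOST}:{CONTROLLER_PORT} is {state}")
```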

The APs are all PoE capable, which is nice if you do not have power outlets close to where you want to deploy them. They come with an AC adapter or can be powered by a PoE-capable network switch like the one I purchased. By default the UBNT switch has all ports set to PoE+, but when I plugged in the AP it would not power up. I tried different cables and nothing worked until I used the AC adapter. After talking to support I found out that you must change the switch port it's connected to from PoE+ to 24V passive. I'm not sure why this matters, but it did the trick. It seems weird that an all-Ubiquiti deployment would not power up the APs with default settings.

The last weird thing I encountered was that performance was not great when using my MacBook. It was not obvious when using a browser or even streaming video, but it was very obvious when I would RDP to servers in the lab. There would be lots of pauses when clicking between tabs and apps in the RDP session. If I kept a ping running to different IPs in the lab, I would see random latency spikes of 15-300ms and a dropped packet about every 20-30 pings. What was weird is that if I performed the same operations from a PC it worked flawlessly. So off I went to search the intertubes, and I saw that Mac performance on Ubiquiti has been an on-and-off problem with different firmware versions. There were a bunch of forum posts about the problem, and after reading through them I saw that people were having good luck running previous firmware versions on their APs.
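For anyone wanting to reproduce the measurement, a small script makes the spikes and drops easier to spot than eyeballing raw ping output. This is a rough sketch only: the target IP is a made-up lab address, and the flags assume a Linux- or macOS-style ping binary.

```
import re
import subprocess
import time

# Rough Wi-Fi health check: ping a lab host once per second and flag
# latency spikes and dropped packets. TARGET is a hypothetical lab IP,
# and the "-c 1" flag assumes a Linux/macOS-style ping.
TARGET = "10.0.0.10"
SPIKE_MS = 100.0   # anything above this is flagged as a spike
COUNT = 60

drops = 0
for i in range(COUNT):
    try:
        result = subprocess.run(
            ["ping", "-c", "1", TARGET],
            capture_output=True, text=True, timeout=5,
        )
        output, ok = result.stdout, result.returncode == 0
    except subprocess.TimeoutExpired:
        output, ok = "", False

    match = re.search(r"time[=<]([\d.]+) ms", output)
    if not ok or match is None:
        drops += 1
        print(f"[{i:02d}] dropped")
    else:
        rtt = float(match.group(1))
        flag = "  <-- spike" if rtt > SPIKE_MS else ""
        print(f"[{i:02d}] {rtt:.1f} ms{flag}")
    time.sleep(1)

print(f"{drops}/{COUNT} packets lost")
```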

So I left the controller and switch on the latest firmware versions but downgraded the AP to 3.4.18, and that fixed the Mac performance issues. Immediately after the older firmware was installed and the AP rebooted, I had a completely normal experience when performing the same RDP functions.

It's only been a few days, but after working through the issues I'm now pretty happy with my decision to make the switch to Ubiquiti. Now I wish they offered cost-effective 10GbE switches that I could deploy to replace my Quanta; then the setup would be ideal.

 

About Brian Suhr

Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix and the owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group. He specializes in VDI and Cloud project designs. Awarded VMware vExpert status six years running, 2011 - 2016. VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design


What is the entry cost for Nutanix Community Edition – CE

Last week Nutanix officially announced the Community Edition of their platform. They let everyone know that a private Alpha and Beta test period had already been completed and that a public Beta period was starting. The public Beta will allow users to sign up, and Nutanix will open up the Beta to a new set of testers each week. This will let them onboard testers gradually without overburdening the process and causing confusion.

The Community Edition (CE) is a software-only version of the Nutanix hyperconverged platform. It currently only runs on KVM for the hypervisor and must be installed on bare metal hardware. This left some people confused and wondering what the cost of testing this community version would be. I was happy to see that there are options for testing a single-node CE install along with a 3- or 4-node install. The fact that a single-node install is allowed offers a pretty low cost of entry. This is the route I chose for my testing since many of my lab servers are AMD based. I did have an Intel-based server and a few whiteboxes that were potential candidates for the testing.

Following some of the conversations online about CE, it seemed like some felt that the cost of entry for playing with CE was too high. While I do agree that a nested version that could be deployed as a virtual appliance is on my wishlist, the bare metal option is nice also. I’m planning on using CE as a long term storage option in my lab over the Nexenta box that I was previously running.

Now on to the costs. I used an HP ML150 G6 server that I purchased about two years ago for around $400. It has just a single Intel E55xx series CPU and 24GB of memory. The built-in NICs are supported by CE, as is the storage controller. I had a consumer-grade Samsung 830 SSD and a pair of 2TB HDDs that I was using in my old build. So the total build for me was probably between $700 and $1,000 at most. I wanted to test on my whitebox build that cost around $500, but a failed CPU is causing a delay.
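For a rough idea of how that estimate comes together, here is the back-of-the-envelope math; only the roughly $400 server price comes from the build above, and the SSD and HDD values are my own assumptions for illustration.

```
# Rough cost estimate for the single-node CE build. Only the ~$400 server
# price is from the build above; the SSD and HDD values are assumptions.
parts = {
    "HP ML150 G6 (1x E55xx, 24GB RAM)": 400,   # from the build above
    "Samsung 830 SSD (consumer, reused)": 150,  # assumed value
    "2TB HDD x2 (reused)": 2 * 90,              # assumed value
}

total = sum(parts.values())
print(f"Estimated build cost: ~${total}")  # lands in the $700-$1,000 range
```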

I did a little eBay searching today and saw that Dell R610 and R710 servers are pretty cheap, averaging around $350 – $500 with the right CPUs and amount of memory. All you would need to do is add the right drives if they don't have them already. So I think this is a pretty reasonable cost for an advanced product. I know many people's home servers may already meet these requirements.

Long term, I'm going to be thinking about how I can design CE to be my main storage in the home lab. It will likely be a non-standard approach of running CE on KVM and presenting it externally to my vSphere clusters, but if it gets the job done I'm fine with that. I may use a pair of single-node installs and replicate between them for my data protection strategy, saving me from burning a third server. This should save me from spending several thousand dollars on a higher-end Synology NAS device and all the drives. It also gives me some cool features that a home NAS won't provide.

If you are running CE and have a reasonably priced build, drop it in the comments and share with others.

 


My VSAN home lab configuration

The excitement around VMware VSAN continues to grow, and I expect to see its release, or the announcement of the release date, on March 6 at a webcast VMware has scheduled. I've been working on expanding the home lab, and one candidate for the new expansion was to get a small VSAN cluster set up and do some testing.

My goal for this setup is to get comfortable with VSAN, test how stable and resilient it is, and maybe see if it has a long-term fit in the home lab. I've been thinking that if I'm pleased with the results, this could be my long-term management cluster for the lab. If not, it will be used as a target cluster for the EUC and Cloud solutions that I work with. Either way, I think VSAN will likely have a home in the lab going forward.

 

Home Lab VSAN Cluster

To build the cluster I am using a group of three HP DL365 G5 servers that I recently acquired. Each server has a pair of quad-core AMD CPUs and 32GB of memory. The servers have only 6 drive bays, so I am somewhat limited in the size of VSAN that I can deploy. But I'm not too worried about that, since this is more about testing the technology and seeing how it might help in my lab.

For my build each server will have a single SSD drive and four spinning disks. This will meet the requirements and give me a bit of capacity to play with also.
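Drive sizes aren't listed above, so purely as an illustration, the sketch below shows how usable capacity works out under some assumed disk sizes, with VSAN's default FTT=1 policy roughly halving the raw space since every object is mirrored.

```
# Back-of-the-envelope VSAN capacity for the 3-node cluster.
# Drive sizes here are assumptions for illustration, not the actual disks.
HOSTS = 3
HDDS_PER_HOST = 4
HDD_GB = 300        # assumed capacity-tier disk size
# The single SSD per host is the caching tier in a hybrid VSAN build,
# so it does not add to usable capacity.

raw_gb = HOSTS * HDDS_PER_HOST * HDD_GB
# With the default FTT=1 policy every object is mirrored, so usable space
# is roughly half of raw, before metadata and slack overhead.
usable_gb = raw_gb / 2

print(f"Raw capacity:   {raw_gb} GB")
print(f"Usable (FTT=1): ~{usable_gb:.0f} GB")
```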



Building new whitebox servers for VMware home lab

I have needed to add some more capacity to the home lab for a while now, but have taken my time. In the past I have been gathering up enterprise servers that are a couple of generations old. These have always done me well, but they hold a limited amount of memory, upgrading them is pretty expensive, and they are very loud. So I decided to go another direction and build a couple of whitebox servers based on common desktop parts. I've been watching for sales and collecting the parts to build them. After finding a couple of good deals lately, I finally had all the parts needed to build two hosts.

Another decision I had to make was whether I needed a server-class motherboard or whether a desktop one would work. After thinking about it, I decided that a desktop motherboard would work just fine and would probably save me a few dollars in the build cost. At this point I almost never use the out-of-band management access on my enterprise servers, and since they are just down in the basement, I can easily run down and access them if needed.

I also did not need the ability to use VT-d, so a server board was even less important. I simply needed hosts with good power and more RAM. It really comes down to memory for me; I needed the ability to run more VMs so that I don't have to turn things on and off.

The Why:

This type of lab is important to me for personal learning and for testing out configurations for the customer designs that I work on during the day. I have access to a sweet lab at work, but it's just better to have your own lab where you are free to do what you want, and my poor bandwidth at the house makes remote access painful.

I want the ability to run a View environment, the vCloud suite, and my various other tools all at once. With these new hosts I will be able to dedicate one of my older servers as the management host and a pair of the older servers as hosts for VMware View. This will leave the two new hosts to run the vCloud suite and other tools.

The How:

I have set the hosts up to boot from USB sticks and plan to use part of the 60GB SSD drives for host cache. The remaining disk space will be used for VMs. Each host will have 32GB of RAM, which is the max the motherboard will support with its 4 slots. There is an onboard 1GbE network connection that is a Realtek 8111E according to the specs. I can report that after loading vSphere 5.1 the network card was recognized and worked without issue. I also had a couple of gigabit network cards laying around that I installed for a second connection in each of the hosts.

The case came with a fan included, but I added another for better cooling and airflow. Even with multiple fans running, the hosts are very quiet since there are no spinning disks in them, and they put out very little heat. I could probably have reduced the noise and heat a bit more by choosing a fanless power supply, but they are over $100 and it was not a priority for me.


How does FreeNAS perform with VMware vSphere

I was talking to a co-worker who was kicking around the idea of using FreeBSD and ZFS for shared storage in his home lab. I thought it sounded decent but never imagined that it could squeeze a ton of IOPS out of some old hardware. So to make my life easier, since I'm no Linux geek, I elected to run FreeNAS 8, which is the same setup but wrapped up in a nice package with a web GUI. Perfect for a former Windows geek.

Now, I never really had very high hopes of getting much performance out of the test server that I would be using. And after eating some Chinese takeout, you can see the fortune I got was telling me not to get my hopes up.

So I dug up one of my old servers that I had retired after vSphere 5 was released. It is a 64-bit machine but does not have VT-compatible CPUs, so it no longer offers much help as a vSphere host. But it was the best candidate for this storage experiment. The server is an IBM x346 with two dual-core Xeon CPUs and 6GB of RAM. I am using just one of the onboard 1GbE connections for Ethernet. For disks I have four 72GB U320 10K drives, of which one is used for the OS install and the other three will be used for a ZFS volume. Yes, that's right, I am going to use just three 10K SCSI drives for this NAS box. I know what you are probably thinking. A picture of this awesome 6+ year old machine is shown below.
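As a rough sketch of what those three data disks can yield, here is the usable space for the two obvious layouts (a simple stripe versus RAIDZ1), ignoring ZFS overhead; the actual pool layout isn't specified above.

```
# Rough usable-capacity comparison for three 72GB drives in ZFS.
# The actual pool layout isn't stated above; both common options are
# shown for illustration, ignoring ZFS metadata overhead.
DISKS = 3
DISK_GB = 72

stripe_gb = DISKS * DISK_GB          # no redundancy, all space usable
raidz1_gb = (DISKS - 1) * DISK_GB    # one disk's worth goes to parity

print(f"Striped pool : ~{stripe_gb} GB usable, no redundancy")
print(f"RAIDZ1 pool  : ~{raidz1_gb} GB usable, survives one disk failure")
```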
