Why Citrix + AHV provisioning path and management is superior to VMware Horizon

In this post, I’ll compare and contrast the management and provisioning path architectures of Citrix on Nutanix AHV using Machine Creation Services (MCS) and two leading VMware Horizon options. While there are always numerous options within deployments, the examples here are based on the best and leading alternatives. I’ve prepared 5,000 and 25,000 user examples to illustrate how a commonly sized environment compares to one at a larger scale. This will show how each option scales and whether complexity increases or remains low.

The reason to look at this is to understand how failures, patching, upgrades and human error might affect the resiliency of the provisioning path and management interface. If the control plane for the underlying hypervisor is down, the VDI broker layer will not be able to provision or manage the desktop VMs. This can have serious implications for users: if they disconnect or log off, there may not be enough available desktops when they return due to a control plane issue.

On the operations side, this is an important discussion as well, because organizations demand simplicity in architectures. They do not want solutions that are complex to set up and maintain. So I will also look at how many management interfaces each alternative imposes on admins and point out any areas of concern.


Citrix + AHV 5,000 User Example

In the first example, we are looking at 5,000 XenDesktop users deployed on the Nutanix AHV hypervisor. XenDesktop communicates directly with the AHV cluster via the Prism cluster IP address and utilizes API calls to perform actions. Prism is the distributed management interface and runs as a service in the Nutanix Controller VM (CVM) on each node. This means that Prism is always available during upgrades, and should a node, CVM or service fail, one of the other nodes will accept incoming Prism connections and API calls.
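As a concrete illustration of the API-driven path described above, here is a minimal sketch of a client building a request against the Prism v2 REST API at the cluster IP. The cluster address and credentials are placeholders, and a real MCS deployment handles these calls internally behind the hosting connection; this is only to show the shape of the interaction.

```python
import base64

CLUSTER_IP = "10.0.0.10"  # Prism cluster virtual IP (placeholder)
BASE = f"https://{CLUSTER_IP}:9440/PrismGateway/services/rest/v2.0"

def list_vms_request(user: str, password: str) -> tuple[str, dict]:
    """Build the URL and headers for a GET /vms call against Prism v2."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",  # Prism uses HTTP basic auth
        "Accept": "application/json",
    }
    return f"{BASE}/vms", headers

url, headers = list_vms_request("admin", "secret")
# A real client would then issue the call, e.g.:
# requests.get(url, headers=headers, verify=False)
```

Because Prism runs on every CVM, the same cluster IP keeps answering these calls even while individual nodes are down or upgrading, which is the availability property the provisioning path relies on.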

In the sample diagram below I’m showing XenDesktop connecting to a single AHV cluster running all 5,000 desktop VMs. This is to showcase the power and flexibility that AHV and Prism provide. AHV does not have a maximum cluster size limit like legacy hypervisors impose. With Prism running on every node in the cluster the management and provisioning operations for VMs and the cluster scale out linearly with the cluster. This means that there is no difference in the performance of provisioning or management operations whether a cluster is 3 nodes or 80 nodes. This allows architects to design for large clusters when applicable without any concerns over imposed cluster size limitations.

Should there be valid reasons, the 5,000 desktops could be split into more than one cluster. Reasons for doing so might be workloads that don’t mix well or adversely affect desktop density, or the desire to divide into distinct failure domains.


Pros:
  • No Single Point of Failure (SPOF) for provisioning or management
  • Node or VM counts do not limit cluster sizes
  • Linear performance of the control plane
  • Highly available control plane and provisioning path
  • Simple architecture that is easy to deploy, manage and operate


Cons:
  • VMware Horizon does not support AHV

[Diagram: VDI provisioning path 001]


VMware Horizon 5,000 user example

In this first VMware Horizon example, we are looking at the classic way of deploying vCenter Server. In this scenario, it does not matter whether you deploy the Windows or vCenter appliance variant. In this classic method vCenter is a single point of failure (SPOF). This means that the environment can be severely impacted during upgrades and failures that take vCenter offline for more than a few minutes.

Another significant constraint to call out is that VMware does not recommend building blocks of infrastructure that host more than 2,000 desktops. This means that each block will consist of a vCenter server and one or more vSphere clusters. In our 5,000 user example, this architecture forces us to have three vCenters; the number of clusters below them is open to how the architect wants to design based on requirements. By limiting the scale of each vCenter, VMware is keeping performance and responsiveness within acceptable limits. But this approach, when scaled, becomes inefficient because you are using additional resources, and the number of items to manage and update continues to grow as you add users.


Pros:
  • Fairly simple to deploy and well understood after a long VMware history
  • Widely supported by applications


Cons:
  • vCenter is a Single Point of Failure (SPOF)
  • vCenter limits scale to 2,000 desktops per vCenter
  • VMware Composer is a SPOF for linked clone provisioning

[Diagram: VDI provisioning path 003]


VMware Horizon 5,000 user example w/vCenter HA


This example is an alternative to the previous one in which I’ve inserted the new vCenter High Availability (HA) option recently released in vSphere 6.5. The vCenter Server Appliance (vCSA) must be utilized if you want to use this HA option. The sizing and architectures are the same; the primary difference is the availability of vCenter. To deploy the vCenter HA configuration, you are required to deploy three vCSA VMs for each vCenter that you want to be highly available: an active, a passive and a witness VM in each deployment. Multiply this out by the three blocks required to deliver 5,000 users and we now have nine vCenter appliances to deploy, manage and upgrade.
This adds a lot of complexity to the architecture for the benefit of increasing the resiliency of the provisioning path and management plane.
Pros:
  • HA option provides resiliency for vCenter features

Cons:
  • Complex to deploy, manage and understand
  • vCenter HA option uses lots of resources, with 3x virtual appliances each
  • Unclear how vendor plug-ins may work in this architecture
  • vCenter still limits scale to 2,000 desktop VMs per vCenter
  • As the design scales, complexity increases with so many management points

[Diagram: VDI provisioning path 004]


Citrix + AHV 25,000 User Example

In this and the following examples, I have scaled the number of users to 25,000 to see what effect this has on the different architectures and management experience. For the Citrix and AHV architecture, nothing changes here other than the number of users. Citrix can accommodate this large number of users within a single deployment. On the AHV cluster side, I have elected to divide the users evenly between four clusters. I could have chosen a single cluster, but that felt extreme; architects can also choose more clusters if that meshes with their requirements. Within Citrix Studio, each AHV cluster will be configured as an endpoint that can be provisioned against.

The point is that with this architecture, organizations can accommodate large numbers of users with a small number of clusters, all of which benefit from highly available provisioning and management controls. Each AHV cluster can be managed via the Prism interface built into the cluster, or Prism Central can be deployed to allow for global management and reporting. An important thing to note is that Prism Central is not in the provisioning path, so it does not affect the architecture explained earlier.


Pros:
  • No cluster size limits, providing flexibility for budget savings and the ability to meet requirements
  • Highly available architecture at all levels with simplicity baked in
  • A small number of clusters reduces node counts by saving on HA nodes for additional clusters
  • Global management functions via Prism Central without affecting provisioning redundancy
  • Single Citrix deployment and management point for all users


Cons:
  • VMware Horizon cannot benefit from AHV

[Diagram: VDI provisioning path 002]


VMware Horizon 25,000 User Example

Now taking a look at the expanded user environment with the VMware Horizon architecture, you can see that I’m showing the vCenter HA alternative. If you have the option for a highly available control plane, most will select it, so I’m not showing the classic single-vCenter option.

The architecture is the same, but you will notice a few things now that the user count has been scaled up to 25,000. We can no longer deliver that many users from a single Horizon installation (pod). The maximum number of users within a pod is 10,000, so we now require three Horizon installs to meet our user count. To be honest, having three Horizon pods does affect the broker management experience, but in this scenario it has no real bearing on the cluster count or design.

Following the 2,000 users per vCenter rule, we will need 13 vCenters to meet our 25,000 user requirement. To keep things clean, the diagram shows just a single cluster attached to each vCenter, but the 2,000 users could be split between a few clusters under each vCenter if that made sense.

You can see from the diagram that deploying 13 vCenters in an HA configuration requires 39 vCenter appliances to be deployed and configured. Yes, that’s right: thirty-nine! Just think about the complexity this adds to troubleshooting and upgrades. Each one of those appliances must be upgraded individually and within a short window so as not to break functionality or support. Upgrades may now force you to upgrade Horizon, Horizon agents, clients, vCenter and vSphere all within a single weekend. That’s a lot of work; the best you could do is upgrade one pod per weekend, and now you’re exposing your staff to three weekends of overtime.
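The arithmetic behind both Horizon examples is simple enough to write down: a 2,000-desktops-per-vCenter block size sets the vCenter count, and the HA option multiplies each vCenter by three appliances (active, passive, witness).

```python
import math

def vcenter_counts(users: int, per_vcenter: int = 2000, ha_vms: int = 3):
    """Return (vCenters required, vCSA appliances required with vCenter HA)."""
    vcenters = math.ceil(users / per_vcenter)  # round up to whole blocks
    return vcenters, vcenters * ha_vms

assert vcenter_counts(5000) == (3, 9)     # the 5,000 user example above
assert vcenter_counts(25000) == (13, 39)  # the 25,000 user example
```

The management-point count grows linearly with users under this model, which is exactly the scaling concern called out above.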


Pros:
  • HA option provides resiliency for vCenter features


Cons:
  • Crazy complex to deploy, manage and understand at this scale
  • vCenter HA option uses lots of resources, with 39 virtual appliances being deployed
  • Unclear how vendor plug-ins may work in this architecture
  • vCenter still limits scale to 2,000 desktop VMs per vCenter
  • Requires viewing three vCenters in Linked Mode to see the entire infrastructure
  • Three different Horizon management consoles to configure and control users
  • Composer is an SPOF for linked clone provisioning in each Horizon pod

[Diagram: VDI provisioning path 005]


VMware Horizon 25,000 users on VxRail

In this last example, we are going to adjust the previous example and look at what would change if it were deployed on VxRail appliances that utilize VSAN for storage. The Horizon and vCenter/vSphere architecture would be the same; the only thing to highlight is what VxRail adds.

Each of the clusters that provides resources for a 2,000 user block would be a VxRail cluster. Each cluster has a VxRail virtual appliance VM that runs on it and is used for appliance management and upgrades. At this scale, each of the 13 clusters will have its own dedicated VxRail Manager, and there is no global management function like Prism Central offers. VxRail Manager is not in the provisioning path, but it does add to the complexity of managing this type of deployment and should be considered before selecting.


Pros:
  • Same as the previous example


Cons:
  • Same as the previous example
  • 13 different VxRail Managers add unneeded complexity
  • VxRail Manager is an SPOF, running as a single VM on each cluster for management operations
  • VxRail includes additional software that potentially exacerbates this further (Log Insight, ESRS, etc.)

[Diagram: VDI provisioning path 006]



To wrap up my thoughts and examples: whether you’re designing a small or large scale VDI environment, it’s important to understand how the management and provisioning structures will function. They determine how highly available the solution is and what level of effort will be required to support it from day 2 onward. The resiliency and simplicity that Citrix offers when connected to Nutanix AHV cannot be rivaled by any other alternative today.


About Brian Suhr

Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix and owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and Cloud project designs. Awarded VMware vExpert status six years running, 2011 - 2016. VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design


Nutanix announces acquisition of PernixData and Calm.io

There has been plenty of guessing and leaked stories over the past month about Nutanix acquiring PernixData, and today all of those stories can be forgotten and a bunch of new ones can start. These new stories will range from "these are great moves to extend the features and lead that Nutanix has" to "oh, this is terrible for PernixData." All I can say is that I’m personally excited to see the PernixData team joining Nutanix; while some people have already left, we wish them the best. I look forward to the many new teammates and to working with them on exciting new projects. Our TME team is growing with an addition from the PernixData family, as well as many others in different roles.



I, as well as many others, have been a fan of PernixData for several years now. In my previous role, I had a chance to work with the product from its beta period and as it matured as a solution. I’m personally excited to watch how both teams merge into one and use the collective brain power to improve the performance of the Nutanix platform and FVP. Something that is probably more exciting for me is watching how the Architect analytics solution that Pernix built is used. Being a big fan of great analytics, I have lots of ideas on how it could be used with existing and future capabilities.

In the press briefing, Nutanix CEO Dheeraj Pandey spoke about some details around what attracted him to PernixData and some thoughts on what is planned.

Server-side storage technologies are going to become 100x faster with Intel’s push towards NVMe and storage-class memory. While we bring applications and data together in same servers, we have to hustle hard to get even closer to the CPU, and yet remain hypervisor-agnostic. SAN arrays, sitting over networks, will look archaic with the mass introduction of 3D-XPoint technologies.


PernixData helps us hug the application even more so. Hyperconvergence remains hypervisor-agnostic, as we place acceleration and data services strategically on the server.


Over the last 4 years, Pernix has built a very strong muscle memory around storage-class memory better than any other software startup that we know of. More importantly, they “see” every IO without compromising on data consistency. That vantage point gives them a unique advantage to pull off online application migration, the 1-click delight that Nutanix has always espoused for.
The two teams will work hard to make the acceleration engine and their datacenter analytics product work across multiple hypervisors, making our enterprise cloud operating system portable to multiple customer environments. And that deliberate parity between VMware ESXi and our very own AHV hypervisor will keep us authentic.



I personally have had very little time to dig into all that Calm has to offer, and I missed an internal briefing due to some PTO. But from exploring their website and watching some videos, I’m excited about the possibilities they bring for extending our cloud operations functions for workflows, deployments, scaling and many other important functions as organizations move toward cloud-like operations.

The enterprise cloud operating system is not complete without an equal emphasis on 1-click automation and orchestration. Developers increasingly want to meld with ITOps, and want to think top-down about applications rather than infrastructure. While we’ve successfully sold to ITOps and application administrators in the last 4 years, it is time for us to go higher up the value chain, even closer to end-users, i.e., DevOps.


Calm was a Sequoia Capital-funded startup that was thinking deep about applications and visual design of app workflows. Everything in Nutanix has been about top-down design — Prism’s design has humanized infrastructure, the way we’ve thought of extremely mundane things in IT. That is exactly what we found in the Calm team — democratizing orchestration without writing too much code. Best part, its vision is extremely aligned with ours on the hybrid nature of future IT. Their pane of glass helps design and orchestrate applications across AWS, Azure, and on-prem web-scale infrastructure. The lightbulb moment came when we saw how elegantly they had integrated Nutanix into their story. It was more than 8-9 months of courtship, when we went deep to understand Aaditya, Jasnoor, and their brainchild called Calm.io.


With Calm integrated, customers will finally be able to choose the right cloud for the right workload, achieve seamless application mobility while experiencing the same simple, delightful, and consistent experience they have come to respect about Nutanix.


Press Release

Here is the official press release:

Nutanix Announces Two Strategic Acquisitions

PernixData and Calm.io Augment Data and Control Fabrics

of the Nutanix Enterprise Cloud Platform


San Jose, CALIFORNIA – August 29, 2016 – Nutanix, a leader in enterprise cloud computing, today announced that PernixData and Calm.io will join the Nutanix family.


Nutanix has executed a definitive agreement to acquire PernixData, a pioneer in scale-out data acceleration and analytics. The transaction is subject to customary closing conditions. In addition, Nutanix has closed the acquisition of Calm.io, an innovator in DevOps automation.


By adding world-class technology, products and engineering talent, Nutanix can accelerate the delivery of an Enterprise Cloud Platform that rivals the agility, automation and consumer-grade simplicity of the public cloud but with the control, security and attractive long-term economics of on-premises infrastructure. These additions will enable Nutanix to pioneer new software stacks for storage-class memory systems, enhance its Application Mobility Fabric (AMF) with cross-cloud workload migration and bring rich, cloud-inspired orchestration and workflow automation to its Prism management software.


The New Data Fabric for the post-Flash Era


Nutanix and PernixData share an architectural design philosophy that next-generation datacenter fabrics must keep data and applications close in order to drive the fastest possible performance and to deliver flexible, cost-effective infrastructure scaling. With this common vision, the two companies will develop an advanced data stack to replace traditional storage silos and high-latency networks with newer storage-class memory and advanced interconnects. These planned strategic investments in new server and storage technologies will provide customers with a re-imagined data fabric for a post-flash era of enterprise computing.


The combined teams will also focus on reducing the inertia of application data that inhibits workload mobility across virtual and cloud environments. Planned enhancements to Nutanix App Mobility Fabric (AMF) will deliver the flexibility to run any application in any environment, without business-critical data being held hostage to a legacy infrastructure.


“PernixData software has helped hundreds of customers virtualize their applications without compromising performance and visibility,” said Poojan Kumar, CEO and co-founder, PernixData. “With highly aligned cultures, ambition and talent, we are genuinely excited to join the Nutanix team. And, with our common devotion to 100% software-driven solutions, will look forward to helping customers accelerate their journey to the Nutanix Enterprise Cloud Platform.”


Unleash DevOps in the Enterprise Cloud

The Calm.io and Nutanix teams will work to bring an application-first approach to choosing, managing and consuming IT infrastructure – enabling customers to pick the right cloud for each application. Nutanix plans to add cloud automation and management capabilities to its existing software stack to deliver application and service orchestration, runtime lifecycle management, policy-based governance, comprehensive reporting and auditing services to support all application environments, including virtual machines, containers and microservices. Together, Calm.io and Nutanix plan to bring together clouds, platforms and people, on an elegantly simple pane-of-glass.


“We have shared a similar vision as Nutanix since day one – datacenter infrastructure must be fully automated, simple to deploy and easy-to-use,” said Aaditya Sood, Calm.io CEO and founder, “We are excited to join the Nutanix team to work together to eliminate the daunting complexity of legacy datacenters by taking a radical, application-centric view of IT infrastructure.”


“Today is a very special day in Nutanix’s history,” said Dheeraj Pandey, Founder, CEO and Chairman, Nutanix. “PernixData and Calm.io both have exceptional technology, solid engineering teams, and visionary leaders with the ‘Founder’s Mentality’; they have dreamt big and persevered against great odds to build phenomenal products. We are honored to welcome them into the Nutanix family, and build the next generation of innovative products and truly helping our customers realize the vision of the Enterprise Cloud.”


PernixData and Calm.io customers can expect further communication in the following weeks.





You wanted the best and you got the Best! – Nutanix on UCS

Today marks a proud day for Nutanix and our customers. As we further extend our lead in the hyperconverged space, it is now fully supported to deploy the Nutanix platform on Cisco UCS servers. Customers now have an additional hardware option to choose from. The current options are NX on Supermicro hardware, XC on Dell hardware as an OEM relationship, or HX on Lenovo hardware as an OEM relationship. Outside of these, Nutanix currently offers software-only deployments on Crystal ruggedized hardware, Open Compute Project (OCP) hardware and now Cisco UCS.


The hottest platform in the world, Nutanix on UCS!

Offering the stability, reliability and performance of the Nutanix platform on Cisco UCS has been a regular request from many of our large customers and partners. Customers no longer have to accept other hardware platforms if they are heavily invested in UCS, or deploy the half-baked or immature HCI solutions that were previously available on UCS.


Nutanix on Cisco UCS

Starting today, customers can deploy Nutanix on UCS through a meet-in-the-field process. This allows customers to purchase Cisco UCS servers through their normal channels and maintain their Cisco UCS relationships. The hardware and software will be deployed at the customer’s location using the standard Nutanix procedures. The Foundation process has been updated to support UCS hardware.

In this initial phase of UCS support, Nutanix will support the C220 and C240 rack-mount servers. There will be two models of the C240 to allow for the use of 2.5″ or 3.5″ drives. We also support deploying with or without Fabric Interconnects (FI), which allows maximum flexibility. These models and configure-to-order flexibility will cover the vast majority of existing use cases. Nutanix will take first call on all support issues and, if a hardware issue is determined, can open a Cisco TAC support case for the customer via TSAnet.



When deployed with Fabric Interconnects, the Foundation process will automatically create the necessary identity pools, policies and service profile templates to allow for the normal automated Nutanix deployment process that has been available for years on other hardware platforms.


Misc. FAQs

Here are several more details about the release that I won’t dive into at this time.

  • Hypervisor support: ESXi 6.0/5.5, AHV and Hyper-V
  • Regular and self-encrypting drives supported on the C240 with 3.5″ drives
  • Haswell and Broadwell CPUs supported




Managing VMs on Nutanix AHV

In this post, I will take a quick tour through the process of managing virtual machines running on the Nutanix Acropolis Hypervisor (AHV). This will touch on the basics and the most common tasks that admins perform on existing VMs. All VM management tasks will be performed from within Prism, the single management interface for Nutanix clusters. The VM view within Prism is where the majority of VM-based tasks are performed. A table-based view provides an easy-to-consume method of finding and managing your virtual machines. When a VM is selected, a group of available actions appears just below the table. These are highlighted in the image below.

These actions are as follows:

  • Enable NGT – Nutanix Guest Tools; provides VirtIO drivers, the Nutanix VSS provider and other services. Similar to VMware Tools for vSphere folks.
  • Launch Console – opens a console session to the VM
  • Power Actions – power off, power on and restart actions
  • Take Snapshot – takes an ad-hoc snapshot of the VM
  • Migrate – live migrates a VM to a different host
  • Pause – pauses a running VM
  • Clone – creates a new VM from the existing VM
  • Update – changes VM settings
  • Delete – deletes a VM

[Image: manage VM actions]
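The same actions exposed in the Prism UI can also be scripted against the Prism v2 REST API, which is handy for bulk operations. Below is a minimal sketch of building the power-action call; the VM UUID is a placeholder, and the transition names shown are the ones I'd expect the v2 API to accept.

```python
import json

PRISM = "https://10.0.0.10:9440/PrismGateway/services/rest/v2.0"  # placeholder IP

def power_action_request(vm_uuid: str, transition: str) -> tuple[str, str]:
    """Build the endpoint URL and JSON body for a VM power-state change."""
    if transition not in {"ON", "OFF", "ACPI_SHUTDOWN", "RESET"}:
        raise ValueError(f"unsupported transition: {transition}")
    url = f"{PRISM}/vms/{vm_uuid}/set_power_state"
    body = json.dumps({"transition": transition})
    return url, body

url, body = power_action_request("00000000-0000-0000-0000-000000000000", "ON")
# A real client would POST this with basic auth, e.g.:
# requests.post(url, data=body, auth=(user, password), verify=False)
```

For interactive one-offs, the Prism UI buttons above are usually faster; the API path matters once you need to act on dozens of VMs at once.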




Migrate VMs from vSphere to Acropolis Hypervisor – AHV

When it comes to building out a new Acropolis Hypervisor (AHV) cluster, you are either starting with new VMs or looking to migrate existing VMs from a vSphere environment. I covered the process of creating new VMs in the “Creating VMs on AHV” post in this series. But if you are moving from vSphere to AHV, there are two challenges that must be planned for: first, what the conversion process will look like, and second, what the migration process will look like. For each of these there are multiple choices, and it will be up to the project team to decide which is best for the project. There are likely other options that I do not mention here.

Converting vSphere VMs

The task of converting a vSphere VM to a VM that runs on AHV is not that different from converting for other hypervisors, or even the P2V days when you were moving from physical servers. You have a VM in a vSphere format that must be converted to a different format. It will also have VMware Tools and drivers installed in the operating system. After the conversion, these will need to be cleaned up, just as they would be when migrating to other platforms.

For this, there are potentially two viable candidates. The first would be grabbing the virtual disk file (VMDK) from a datastore and using the image management feature in AHV to import and convert the disk. There would still be cleanup to do, but it’s a simple one-off approach. This method would not be ideal for a large batch of VMs; it might, however, be a simple method if you just have a handful of template VMs that you want to populate on the new cluster.
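As a rough sketch of what that one-off conversion step involves: AHV works with raw guest disks, and a standalone qemu-img conversion of an exported VMDK shows the format change that image management performs for you on import. The file names below are placeholders.

```python
def convert_cmd(src_vmdk: str, dst_raw: str) -> list[str]:
    """Build a qemu-img command converting a VMDK to a raw disk image."""
    return ["qemu-img", "convert",
            "-f", "vmdk",   # source format: VMware virtual disk
            "-O", "raw",    # output format usable as an AHV guest disk
            src_vmdk, dst_raw]

cmd = convert_cmd("win10-template.vmdk", "win10-template.raw")
# On a host with qemu-img installed, run it with e.g.:
# subprocess.run(cmd, check=True)
```

The in-guest cleanup of VMware Tools and drivers still has to happen afterward; the conversion only handles the disk format.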

The second and leading choice is the following method. In short, you install the VirtIO drivers in advance (think of them as the equivalent of the drivers VMware Tools installs), Storage vMotion the VMs to a shared Nutanix datastore, power off each VM and create a new VM on AHV using the vDisks that were moved over. A fellow Nutant and VCDX has already created a detailed blog post about converting a vSphere VM to run on AHV. You can read the full details here, as written by Artur Krzywdzinski.

In the future, I would like to see something like Double-Take offer a solution to help with these cross-platform conversions and migrations; this is something they have working for cloud migrations today. I see this becoming a common task, and demand is only going to increase as organizations want to move workloads between different resource pools, whether on-premises or off-premises, and very few of these moves will involve a single VM format.


Migrating the VMs

The conversion options available at the time this was published were just covered. I also touched on the option of sharing out a datastore from the AHV cluster to an existing vSphere cluster. This allows you to Storage vMotion any VMs to that shared datastore before shutting them down to complete the conversion process. This method will likely be the most heavily used, since it’s familiar from other legacy migrations between old and new infrastructure.

A second option is available if both the vSphere and AHV clusters are running on Nutanix gear. You can create a Protection Domain (PD) on the vSphere cluster to replicate all of the VMs to the AHV cluster. This gets the data over to the AHV cluster efficiently, and it would also be a great option if moving between sites.


