Why Citrix + AHV provisioning path and management is superior to VMware Horizon

In this post, I’ll contrast and compare the different management and provisioning path architectures between Citrix on Nutanix AHV using Machine Creation Services (MCS) and two leading VMware Horizon options. While there are always numerous options within deployments, the examples here are based on the best and leading alternatives. I’ve prepared 5,000 and 25,000 user examples to illustrate how a commonly sized environment compares to one at a larger scale. This will show how things scale and whether complexity increases or remains low.

The reason to look at this is to help understand how failures, patching, upgrades and human error might affect the resiliency of the provisioning path and management interface. If the control plane for the underlying hypervisor is down, the VDI broker layer will not be able to provision or manage the desktop VMs. This can have serious implications for users: if they are disconnected or log off and there are not enough available desktops when they return due to a control plane issue, they may be unable to access resources.

On the operations side, this is an important discussion as well, because organizations demand simplicity in architectures. They do not want solutions that are complex to set up and maintain. So I will also look at how many management interfaces the alternatives impose on admins and point out any areas of concern.

 

Citrix + AHV 5,000 User Example

In the first example, we are looking at 5,000 XenDesktop users deployed on the Nutanix AHV hypervisor. XenDesktop communicates directly with the AHV cluster via the Prism cluster IP address and utilizes API calls to perform actions. Prism is the distributed management interface and runs as a service in the Nutanix controller VM (CVM) on each node. This means that Prism is always available during upgrades, and should a node, CVM or service fail, one of the other nodes will accept incoming connections to Prism and API calls.
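To make that provisioning path concrete, below is a minimal sketch of the kind of REST call a broker can make against the Prism cluster IP. The endpoint, port, field names and credentials are assumptions based on the Prism v2.0 REST API and are illustrative only; XenDesktop/MCS issues its own API calls natively, so this only shows that the whole path terminates at a single, highly available cluster address.

```python
# Hypothetical example: list VMs on an AHV cluster via the Prism cluster IP.
# Endpoint, port and credentials are assumptions (Prism v2.0 REST API), shown
# only to illustrate the provisioning/management path described above.
import requests

PRISM_VIP = "https://prism.example.local:9440"   # placeholder cluster virtual IP
AUTH = ("admin", "secret")                        # placeholder credentials

def list_vms():
    """Return the list of VM entities the cluster reports."""
    resp = requests.get(
        f"{PRISM_VIP}/PrismGateway/services/rest/v2.0/vms",
        auth=AUTH,
        verify=False,   # lab-only: skip certificate validation
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("entities", [])

if __name__ == "__main__":
    for vm in list_vms():
        print(vm.get("name"), vm.get("power_state"))
```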

In the sample diagram below I’m showing XenDesktop connecting to a single AHV cluster running all 5,000 desktop VMs. This is to showcase the power and flexibility that AHV and Prism provide. AHV does not have a maximum cluster size limit like legacy hypervisors impose. With Prism running on every node in the cluster, the management and provisioning operations for VMs and the cluster scale out linearly with the cluster. This means there is no difference in the performance of provisioning or management operations whether a cluster is 3 nodes or 80 nodes. This allows architects to design for large clusters when applicable without any concerns over imposed cluster size limitations.

Should there be valid reasons, the 5,000 desktops could be split into more than one cluster. Reasons for doing so might be workloads that don’t mix well or adversely affect desktop density, or the desire to divide into distinct failure domains.

Pros:

  • No Single Point of Failure (SPOF) for provisioning or management
  • Node or VM counts do not limit cluster sizes
  • Linear performance of control plane
  • Highly available control plane and provisioning path
  • Simple architecture that is easy to deploy, manage and operate

Cons:

  • VMware Horizon does not support AHV

[Diagram: Citrix + AHV 5,000 user provisioning path]

 

VMware Horizon 5,000 user example

In this first VMware Horizon example, we are looking at the classic way of deploying vCenter Server. In this scenario it does not matter whether you deploy the Windows or vCenter appliance variation. In this classic method vCenter is a single point of failure (SPOF). This means that the environment can be severely impacted during upgrades and failures that take vCenter offline for more than a few minutes.

Another significant constraint to call out is that VMware does not recommend building blocks of infrastructure that host more than 2,000 desktops. This means that each block will consist of a vCenter server and one or more vSphere clusters. In our 5,000 user example, this architecture forces us to have three vCenters, and the number of clusters below them is open to how the architect wants to design based on requirements. By limiting the scale of each vCenter, VMware is keeping the performance and responsiveness within acceptable limits. But this approach, when scaled, becomes inefficient because you are using additional resources, and the number of items to manage and update continues to grow as you add users.

Pros:

  • Fairly simple to deploy and well understood given VMware’s long history
  • Widely supported by applications

Cons:

  • vCenter is a Single Point of Failure (SPOF)
  • vCenter is a limiting factor at 2,000 desktops per vCenter
  • View Composer is a SPOF for linked clone provisioning

[Diagram: VMware Horizon 5,000 user provisioning path]

 

VMware Horizon 5,000 user example w/vCenter HA

 

This example is simply an alternative to the previous one in which I’ve inserted the new vCenter High Availability (HA) option that was recently released in vSphere 6.5. The vCenter Server Appliance (vCSA) must be utilized if you want to use this HA option. The sizing and architectures are the same; the primary difference is the availability of vCenter in this alternative. To deploy the vCenter HA config you are required to deploy 3 vCSA VMs for each vCenter that you want to be highly available. There will be an active, a passive and a witness VM in each deployment. Multiply this out with the three blocks required to deliver 5,000 users and we now have nine vCenter appliances to deploy, manage and upgrade.
This adds a lot of complexity to the architecture for the benefit of increasing the resiliency of the provisioning path and management plane.
Pros:

  • HA option provides resiliency for vCenter features

Cons:

  • Complex to deploy, manage & understand
  • vCenter HA option uses lots of resources with 3x virtual appliances each
  • Unclear how vendor plug-ins may work in this architecture
  • vCenter is still limited by the 2,000 desktop VM limit per vCenter
  • As the design scales, complexity increases from having so many management points

[Diagram: VMware Horizon 5,000 user provisioning path with vCenter HA]

 

Citrix + AHV 25,000 User Example

In this and the following examples, I have scaled the number of users to 25,000 to see what effect this has on the different architectures and management experience. For the Citrix and AHV architecture, nothing changes here other than the number of users. Citrix can accommodate this large number of users within a single deployment. On the AHV cluster side of things, I have elected to evenly divide the users between four different clusters. I could have chosen a single cluster, but that felt extreme; architects can also choose more clusters if that meshes with their requirements. Within Citrix Studio, each AHV cluster will be configured as an endpoint that can be provisioned against.

The point is that with this architecture organizations can accommodate large numbers of users with a small number of clusters, all of which benefit from highly available provisioning and management controls. Each AHV cluster can be managed via the Prism interface built into the cluster, or Prism Central can be deployed to allow for global management and reporting. An important thing to note is that Prism Central is not in the provisioning path, so it does not have any effect on the architecture explained earlier.

Pros:

  • No cluster size limits, providing flexibility to save budget and meet requirements.
  • Highly available architecture at all levels with simplicity baked in.
  • Small number of clusters reduces node counts by saving on the number of HA nodes for additional clusters.
  • Global management functions via Prism Central without affecting provisioning redundancy.
  • Single Citrix deployment and management point for all users.

Cons:

  • VMware Horizon cannot benefit from AHV

[Diagram: Citrix + AHV 25,000 user provisioning path]

 

VMware Horizon 25,000 User Example

Now taking a look at the expanded user environment with the VMware Horizon architecture, you can see that I’m showing the vCenter HA alternative. I think that if you have the option for a highly available control plane most will select it, so I’m not showing the classic single vCenter option.

The architecture is the same, but you will notice a few things now that the user count has been scaled up to 25,000. We can no longer deliver that many users from a single Horizon installation (Pod). The maximum number of users within a pod is 10,000, so we now require three Horizon installs to meet our user count. To be honest, having three Horizon pods does affect the broker management experience, but in this scenario it has no real bearing on the cluster count or design.

Following the 2,000 users per vCenter rule, we will need 13 vCenters to meet our 25,000 user requirement. To keep things clean the diagram shows just a single cluster attached to each vCenter, but the 2,000 users could be split between a few clusters under each vCenter if that made sense.

You can see from the diagram that deploying 13 vCenters in an HA configuration requires 39 vCenter appliances to be deployed and configured. Yes, that’s right, thirty-nine! Just think about the complexity this adds to troubleshooting and upgrades. Each one of those appliances must be upgraded individually and within a short window to not break functionality or support. Upgrades now may force you to upgrade Horizon, Horizon agents, clients, vCenter and vSphere all within a single weekend. That’s a lot of work; the best you could do is tackle one pod per weekend, and now you’re exposing your staff to three weeks of overtime and the loss of their weekends.
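For reference, the block math above can be reproduced with a few lines of arithmetic. The limits used (2,000 desktops per vCenter, 3 appliances per HA vCenter, 10,000 users per Horizon pod) are the ones quoted in this post; treat this as a rough sketch rather than official sizing guidance.

```python
# Rough sketch of the block math used in this post: 2,000 desktops per vCenter,
# 3 appliances per vCenter in HA mode, 10,000 users per Horizon pod.
import math

def horizon_block_count(users, desktops_per_vcenter=2000,
                        appliances_per_ha_vcenter=3, users_per_pod=10000):
    vcenters = math.ceil(users / desktops_per_vcenter)
    return {
        "vcenters": vcenters,
        "vcenter_ha_appliances": vcenters * appliances_per_ha_vcenter,
        "horizon_pods": math.ceil(users / users_per_pod),
    }

print(horizon_block_count(5000))
# {'vcenters': 3, 'vcenter_ha_appliances': 9, 'horizon_pods': 1}
print(horizon_block_count(25000))
# {'vcenters': 13, 'vcenter_ha_appliances': 39, 'horizon_pods': 3}
```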

Pros:

  • HA option provides resiliency for vCenter features

Cons:

  • Crazy complex to deploy, manage & understand at this scale
  • vCenter HA option uses lots of resources with 39 virtual appliances being deployed
  • Unclear how vendor plug-ins may work in this architecture
  • vCenter is still limited by the 2,000 desktop VM limit per vCenter
  • Three vCenter linked-mode views are needed to see the entire infrastructure
  • Three different Horizon management consoles to configure and control users
  • Composer is a SPOF for linked clone provisioning per Horizon Pod

[Diagram: VMware Horizon 25,000 user provisioning path]

 

VMware Horizon 25,000 users on VxRail

In this last example, we are going to adjust the previous example and look at what would change if it was deployed on VxRail appliances that utilize VSAN for storage. The Horizon and vCenter / vSphere architecture would be the same; the only thing to highlight is what VxRail adds.

Each of the clusters that provides resources for a 2,000 user block would be a VxRail cluster. Each of these clusters has a VxRail virtual appliance VM running on it that is used for appliance management and upgrades. Given this scale, we now see that each of the 13 clusters will have its own dedicated VxRail Manager, and there is no global management function like the one Prism Central offers. VxRail Manager is not in the provisioning path, but it does add to the complexity of managing this type of deployment and should be considered before selecting.

Pros:

  • Same as previous example

Cons:

  • Same as previous example
  • 13 different VxRail Managers add unneeded complexity
  • VxRail Manager is a SPOF, running as a single VM on each cluster for management operations
  • VxRail includes additional software that potentially exacerbates this further (Log Insight, ESRS, etc.)

[Diagram: VMware Horizon 25,000 users on VxRail provisioning path]

 

Conclusion

Just to wrap up my thoughts and examples: whether you’re designing a small or large scale VDI environment, it’s important to understand how the management and provisioning structures will function. These determine how highly available the solution is and what level of effort will be required to support it from day 2 onward. The resiliency and simplicity that Citrix offers when connected to Nutanix AHV cannot be rivaled by any other alternative today.

 

About Brian Suhr

Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix and owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and Cloud project designs. Awarded VMware vExpert status for six years (2011 - 2016). VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design


Guide to understanding LoginVSI test results and how to compare

The use of LoginVSI as a VDI performance testing or validation tool has increased over the last several years. It’s really the only tool to offer these services from an independent party, so by default it’s the de facto option for vendors to showcase their solutions. Vendors use LoginVSI on a regular basis to demonstrate how their solution meets a common set of tests, which makes them a candidate to be considered for your VDI projects.

To learn the basics of how to understand the results from LoginVSI tests you can refer to a post on their blog here. It’s an older post but still pretty valid since the data points have not changed that much over the versions. The danger enters when you are looking to take testing results from multiple vendors and compare them. You simply cannot take results from different tests and compare the data points without understanding how the testing was done, what was tested and how the differences in the tests affect the results. Also, what should you be aware of that might affect the results?

So in this post, I will lay out a number of items to help educate on how to better understand, compare and interpret the LoginVSI results that are published. Because while someone may be publishing a very low VSIbase number and/or high densities, you need to be able to determine whether that means anything to your environment and if it’s really valid to anyone.

 

VDI Brokers

I think most people understand that comparing results from a Citrix test to a VMware Horizon test is not apples to apples. There is a certain amount of overlap that can be accounted for, but to be fair you should be comparing tests with the same data points. Then there are different types of desktops, the whole persistent versus non-persistent discussion and how these apply to each other. Both VMware and Citrix now offer two different types of non-persistent provisioning options within their products, so comparing results gets even more fuzzy. I don’t see many vendors running LoginVSI testing using persistent desktops, so that should not be much of a concern. But there are significant differences in how the different non-persistent provisioning options work that you should be aware of when interpreting results.

The version of the Citrix or VMware broker should be the same or very close in the different tests that you are comparing. Along with the many provisioning options explained below, the version of the broker could affect the test results if one revision provided performance improvements that another did not.

 

Citrix

Citrix offers two different provisioning methods for non-persistent desktops, which will be the focus for the majority of tests that you will encounter. Some vendors may provide results for both. The different options are Machine Creation Services (MCS) and Provisioning Services (PVS). In short, MCS is a storage based architecture while PVS is heavily network based, using centralized caching points. Each option is explored a bit deeper in the following sections.

PVS

The PVS architecture is unique to Citrix and typically uses multiple PVS servers that are load balanced. The golden image for the VDI pool is loaded onto the PVS servers and presented as read-only. These PVS servers use memory within the server OS as a caching layer that allows commonly accessed blocks to be quickly returned to guests, improving performance. Each VDI/SBC virtual machine is referred to as a PVS target and is a VM with no persistent data or OS installed. These PVS targets boot the golden image via a network connection to one of the PVS servers.

The writes for each PVS target can be cached in a number of different ways. Each method offers its pros and cons, and it makes comparing results invalid if the tests are run using different PVS write caching methods; a small sketch after the list below illustrates where the writes land in each case.

  • Cache in device RAM – Each PVS target (VDI VM) will be assigned additional memory above what your OS/image requirements are and this is memory from the physical host supplying resources to all VMs running on it. The writes for each VM will go into the assigned RAM for that target and be persisted until the session is finished.
  • Cache on device hard disk – In this option, the writes for each VM are stored on a local hard disk for each VM and this is typically the same storage that is being used for running all VDI virtual machines.
  • Cache in device RAM with overflow on hard disk – This last option is a combination of the two previous methods. It typically is configured to provide a smaller amount of RAM for caching and if writes exceed this amount during a session it will begin to use hard disk for the overflow.
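Below is that toy model of where the writes land under each caching method. The cache size and method names are illustrative only, not actual Citrix settings; the point is simply how differently the storage platform gets exercised depending on the cache choice.

```python
# Toy model of where PVS target writes land under each caching method.
# Cache sizes and method names are illustrative, not real Citrix settings.
def route_writes(method, write_mb, ram_cache_mb=256):
    """Return (mb_written_to_ram, mb_written_to_disk) for one session."""
    if method == "ram":                    # Cache in device RAM
        return write_mb, 0
    if method == "disk":                   # Cache on device hard disk
        return 0, write_mb
    if method == "ram_with_overflow":      # Cache in device RAM with overflow
        to_ram = min(write_mb, ram_cache_mb)
        return to_ram, write_mb - to_ram
    raise ValueError(f"unknown caching method: {method}")

for m in ("ram", "disk", "ram_with_overflow"):
    print(m, route_writes(m, write_mb=1024))
# ram (1024, 0)                -> the storage array sees almost no write I/O
# disk (0, 1024)               -> every write hits the storage platform under test
# ram_with_overflow (256, 768) -> results depend heavily on the RAM cache size
```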

** When interpreting the results you cannot fairly compare a test that uses PVS with RAM cache versus a test that uses PVS with disk cache.

** It would also be incorrect to compare a test that uses PVS with any caching method to a test that uses MCS with any caching method.

** A point to question is why any modern hybrid or all-flash storage vendor would utilize PVS with RAM cache to showcase their storage solution. This virtually removes the storage solution from the testing and does not validate their solution. PVS is a legacy solution that was designed to hide the poor performance of legacy storage arrays.

** If a vendor tested with PVS and does not specifically explain how the write cache was set up and configured, you should be suspicious and request further details. If there are performance charts, look at write performance; if it’s low or near zero, they are using RAM cache.

 

MCS

The MCS architecture takes a storage-focused approach to provisioning non-persistent desktops. The golden image is a shared VM that all of the virtual desktops in a pool will boot from. This golden image is read only, and all read requests are served by the storage that it’s sitting on. This is different from PVS in that there are no PVS servers providing read requests that are cached in memory.

Until XenDesktop 7.9, all reads and writes from these desktop virtual machines were serviced by the storage they were running on. In 7.9 Citrix introduced the Cache in device RAM and Cache in RAM with overflow options for MCS as well, which will use the host memory as a caching layer for writes. This now allows the same write caching options between PVS and MCS, with the main difference being where the reads are serviced for the golden image.

** When interpreting the results you cannot fairly compare a test that uses MCS with RAM cache versus a test that uses MCS with disk cache.

** It would also be incorrect to compare a test that uses MCS with any caching method to a test that uses PVS with any caching method.

** A point to question is why any modern hybrid or all-flash storage vendor would utilize MCS with RAM cache to showcase their storage solution. This removes the storage solution from servicing all or most of the write traffic and does not validate their solution.

 

VMware

VMware also offers two different provisioning methods for non-persistent desktops, which will be the focus for the majority of tests that you will encounter. The different options are Linked Clones and Instant Clones. As of 2016, I don’t think you will see any vendors test anything but linked clones, as instant clones are a new technology that is still maturing. In the future, I would expect that many vendors will begin to provide results for both. In short, linked clones are a storage based architecture while instant clones are a new method that removes several of the large storage spikes. Each option is explored a bit deeper in the following sections.

 

Linked Clones

The linked clone provisioning method from VMware is very similar to the MCS option explained above from Citrix, but without the different write caching options. Linked clones use a golden image or replica that all read operations for the desktop pool are serviced from. Each desktop virtual machine has a delta disk and this is where all write operations are performed. This makes linked clones a provisioning method that is heavily affected by the performance of the storage platform used in your design.

There is one caching alternative available for linked clones, called View Storage Accelerator or Content Based Read Cache (CBRC). This can utilize up to 2GB of host memory to cache commonly accessed bits from the replica image for read operations.
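As a rough illustration of why a host-side read cache changes what the storage system sees, here is a toy hit/miss count for many desktops reading the same replica blocks. The block counts are made up and this is not a model of the actual CBRC implementation; it only shows how read I/O can disappear from the storage platform under test.

```python
# Toy illustration of a host-side read cache (like the View storage accelerator)
# absorbing repeated reads of the replica image. Block counts are made up.
def split_reads(block_reads, cache_size_blocks):
    cache, hits, misses = set(), 0, 0
    for block in block_reads:
        if block in cache:
            hits += 1                       # served from host memory
        else:
            misses += 1                     # this read goes to the storage system
            if len(cache) < cache_size_blocks:
                cache.add(block)
    return hits, misses

# 100 desktops booting from the same 50 replica blocks
reads = [block for _ in range(100) for block in range(50)]
print(split_reads(reads, cache_size_blocks=50))
# (4950, 50): the storage system only ever sees 50 of the 5,000 reads
```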

** Take note if testing was done using the storage accelerator, as it removes some of the read operations from the storage system and cannot be fairly compared to another test that does not use the same feature.

** Likewise, if you do compare results and the vendor that does not use the storage accelerator is able to provide better results than a vendor that does use it, that is something to be aware of.

 

Instant Clones

The instant clone architecture is somewhat like a modernized version of linked clones. The philosophy is similar, but rather than each pool using a single replica image and pulling all reads from that single image on storage, instant clones create a replica VM for the pool image on every host. The replica on each host has the OS booted and is then placed in a stunned state. This allows each new virtual desktop, when created, to use the state of the replica as its starting point without the need to initially boot up the OS. This approach saves time and reduces storage peaks during provisioning and image update procedures.

For these reasons, it would not be fair to compare a test that used instant clones to one that used linked clones. The different provisioning methods can dramatically affect the provisioning times and I/O behavior during steady state.

 

Windows Versions

The version of Windows is important when comparing test results. While Windows version may not be as impactful as some of the other points discussed in this post, it is still something that must be taken seriously. I think that most will agree that Windows 7 or 10 are the primary Client OS versions that are already deployed or being deployed today. Deploying Windows 10 will result in about a 10-20% reduction in user density.

 

Office Versions

This is important when you are sizing your own environment, but since we are talking about LoginVSI testing it is also very relevant to this discussion. Different versions of MS Office can dramatically affect the performance and density of tests. You can read more about the effects of different Office versions on RDSH/VDI user densities here; to save you the reading time, I will summarize. Office 2010 currently offers the best user density of the Office versions that are still widely deployed (although no longer in mainstream support). Using Office 2013 will result in a 20% reduction in density when compared to 2010. Office 2016 further reduces the density 5% lower than 2013, or 25% less than 2010.

As you can see, with the effects Office can have on user densities, it would not be accurate to compare tests that used different Office versions. Let’s look at the following scenario: the vendor you prefer has published a report that meets all of your requirements.

  • Vendor fulfills all your requirements
  • VSIbase is attractive
  • Density is 20% lower than other tests
  • But vendor is using Office 2016 in their testing
  • Same OS versions, same provisioning methods used

Based on these points I would be comfortable: given that all other testing points match, the lower density can be accounted for by the different Office versions. A rough way to normalize such results is sketched below.
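This is only a rough sketch using the approximate density deltas quoted above (2013 about 20% lower than 2010, 2016 about 25% lower). The factors are ballpark figures from this post, not measured constants, so treat the output as a sanity check rather than a sizing number.

```python
# Rough normalization of published densities to an Office 2010 baseline,
# using the approximate deltas quoted above (ballpark factors only).
OFFICE_DENSITY_FACTOR = {"2010": 1.00, "2013": 0.80, "2016": 0.75}

def normalize_to_2010(users_per_host, office_version):
    """Estimate what a result might have looked like with Office 2010."""
    return users_per_host / OFFICE_DENSITY_FACTOR[office_version]

# Vendor A tested with Office 2016, Vendor B with Office 2010:
print(round(normalize_to_2010(150, "2016")))   # ~200 users per host, 2010-equivalent
print(round(normalize_to_2010(200, "2010")))   # 200 users per host, already baseline
```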

 

CPU Generations

Just like it would not be correct to compare the towing capacity of a truck with a V6 engine to that of one with a V8, comparing results from tests that use different Intel CPU generations is also not apples to apples. In general, you should be comparing test results that showcase the same CPU generation; if that is not possible, you can still look at the results, but there would be no way to account for the potential differences.

  • Intel Ivy Bridge (v2) processors
  • Intel Haswell (v3) processors
  • Intel Broadwell (v4) processors

Each of these CPU generations offers a performance increase over the previous one that affects both consolidation ratios and overall performance. These performance benefits are obvious for the virtual desktops, but if you are running a software-defined storage or hyperconverged (HCI) solution, the storage layer will also benefit from these CPU improvements.

 

Memory

Memory can also impact your environment. Specific DIMM configurations can result in reduced memory speed, and the speed that is available matters (1866 vs 2133 MHz is about a 20% density difference, for example). This drop in memory speed is typically the result of configuring a server with too many DIMM slots populated, which lowers the speed. If a configuration you are considering uses more than 512GB of host memory, you should check the vendor documentation to understand what will happen to the memory speed for the proposed configuration.

 

LoginVSI Versions

Like any software vendor, LoginVSI makes changes to their software on a regular basis, and these changes can affect the testing results. Different versions of the testing software could be using different applications, different testing methodologies or other factors. For these reasons, it is important to make sure that when comparing results from different tests they are at least on the same major version release. It would be unfair to compare a test run using LoginVSI 3.5 to one using version 4.0. It is more acceptable to compare tests run on 4.0 and 4.5 as long as all other factors explained in this post are in alignment.
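To pull several of these comparison rules together, here is a small sketch of a comparability check: same LoginVSI major version, same broker, provisioning method, write cache, Windows and Office versions. The field names and sample values are purely illustrative, not a LoginVSI report format.

```python
# Sketch of a comparability check encoding rules from this post. Field names
# and sample values are illustrative only, not a LoginVSI report format.
def comparable(test_a, test_b):
    reasons = []
    if test_a["loginvsi_version"].split(".")[0] != test_b["loginvsi_version"].split(".")[0]:
        reasons.append("different LoginVSI major versions")
    for field in ("broker", "provisioning", "write_cache", "windows", "office"):
        if test_a[field] != test_b[field]:
            reasons.append(f"different {field}")
    return (not reasons), reasons

a = {"loginvsi_version": "4.1", "broker": "XenDesktop 7.9", "provisioning": "MCS",
     "write_cache": "disk", "windows": "10", "office": "2016"}
b = {"loginvsi_version": "4.5", "broker": "XenDesktop 7.9", "provisioning": "MCS",
     "write_cache": "ram", "windows": "10", "office": "2016"}
print(comparable(a, b))   # (False, ['different write_cache'])
```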

 

VM Sizes

When it comes to LoginVSI testing, or any type of VDI testing, most people are seeking two primary data points. The first is storage performance, which historically was the major pain point in past deployments. The second data point is user density, or the number of virtual desktops per host. The reality is that LoginVSI testing is valuable but can in no way be used to tell you exactly how your environment should be sized.

To size your environment you will need to understand your use cases and their requirements, then combine those details with performance results that were collected from your actual environment. This will provide you with actual data points that can be used for sizing; the LoginVSI results, along with a skilled EUC architect, can then provide customized sizing for your environment.

 

What to watch for

Something I’ve seen all too often is that vendors will size the VMs they use for running their LoginVSI tests at the bare minimum needed to pass the test. By providing the minimal amount of CPU and memory to each VM, they can try to show a higher density of users per host while still passing the test. The danger here is that vendors that do this create a false sense of user density that confuses customers and architects. Saying that you can achieve 300 users per host while using a configuration that is not likely to be deployed into production by 99% of customers is worthless. In doing so they are either trying a bait and switch or proposing that you dramatically undersize your design. Either of these approaches would get you thrown out of my office if I were the customer.

So when looking at published test results, pay close attention to the size of the desktops used during the testing. A Windows 7 desktop with 1 vCPU and 1.5GB of memory may get you passing test results, but for the vast majority of use cases it is not going to provide a delightful user experience.

 

Today’s VM averages

With the above discussion on what to look out for in desktop sizing, I thought it would be helpful to level set on what acceptable sizing looks like in 2016. Since the release of Windows 7, and even more so when moving to Windows 8/10, the need for 2 vCPU for each virtual desktop is the new normal. Today when sizing you should default to 2 vCPU and only move down to 1 vCPU or increase past 2 vCPU when you have valid testing data to support the decision.

The default starting point for modern Windows versions should not go below 2GB of memory unless valid testing has been done to support the request. While 2GB is the starting point, I took an informal survey of several EUC experts and the results showed that their current default sizing for VDI is 2 vCPU and 3-4GB of memory. In the end, the amount of memory will depend on the use case requirements and the applications being used, but these data points help provide some guidance on what is acceptable and what is not when it comes to test results. A rough per-host density estimate based on this guidance is sketched below.
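The overcommit ratio, reserved memory and host specs in the sketch are placeholder assumptions, not recommendations; real sizing still needs your own assessment data, as discussed above.

```python
# Rough per-host density estimate from the sizing guidance above. Overcommit
# ratio, reserved memory and host specs are placeholder assumptions.
def desktops_per_host(host_cores, host_ram_gb, vm_vcpu=2, vm_ram_gb=3,
                      vcpu_per_core=6, reserved_ram_gb=32):
    cpu_bound = (host_cores * vcpu_per_core) // vm_vcpu
    ram_bound = (host_ram_gb - reserved_ram_gb) // vm_ram_gb
    return int(min(cpu_bound, ram_bound))

# Example: dual 14-core host with 512 GB RAM, 2 vCPU / 3 GB desktop image
print(desktops_per_host(host_cores=28, host_ram_gb=512))
# 84 -> CPU-bound (84) rather than memory-bound (160) with these assumptions
```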

Friends don’t let friends deploy 1 vCPU desktops

 

Use Cases / Scenarios

So far I’ve covered a bunch of configuration points and hardware details that can affect test results. One of the last things to be aware of is to ensure that the tests you are comparing use the same type of use case or scenario. These are commonly referred to as knowledge worker, task worker, developer, etc. They determine the applications used and how demanding the workload will be. Obviously a developer use case is far more demanding than a task worker that commonly uses 1 or 2 simple applications. Most tests are focused on the knowledge worker use case.

 

Scaling out designs

Typically you will see vendors testing in a range of 200 to 1,000 desktops in a test run. There may be a few tests that use larger quantities, but they are not as common. The main thing to look for here is whether the vendor provides a detailed explanation of how you would scale from the tested amount to the amount your end state design is projected to be. As an example, if I am looking at a test for 1,000 desktops, I will need to understand how this vendor would scale and what my design would look like for the 20,000 desktops that my environment is projected to reach.

The default answer from most vendors will probably be that it’s just a cookie cutter approach and you can simply stamp out the same build as what was tested. This is not good enough, and you should press them harder for real answers.

Questions to understand

These are several data points that you should understand when looking at larger designs or at how you will scale from a starting amount to your future desired state.

  • What are the cluster sizing limitations?
    • For example, can I only create clusters of 500 or 1,000 users, which would mean I need 20-40 clusters to reach 20,000 desktops?
  • As you scale, how does this affect the management story?

 

Conclusion

If you are evaluating platforms for your current or future VDI/EUC environment, then you have probably been looking at LoginVSI results. When going through your normal solution evaluation process, be sure that you consider all of the points explained in this post, especially when trying to make sense of testing results. These will help you better understand how things were tested and also whether someone is trying to spin things unfairly in their favor.

 


Proud to announce that Architecting EUC Solutions book is now available

It’s been a long journey over the past year, but I’m proud to announce that the Architecting EUC Solutions book is finally available. The book focuses on helping you develop your design for modern EUC solutions. It also touches briefly on the strategy and roadmap phases of these projects. The chapters are created so that each one covers a different topic, and they range across all of the EUC solutions, operations, infrastructure and all parts required for a design.

The content in the book is very vendor neutral and is not a blueprint on how to write a VMware Horizon or Citrix XenDesktop design. Instead, it takes an approach of educating architects on what questions to ask and how to evaluate alternatives, so you can then apply these to the solution of your choice.


I would like to thank Sean Massey for helping by contributing some content for the book and Kees Baggerman for stepping up as a technical reviewer. I hope that if you read the book, you enjoy it and it’s able to help you on your design journey. If you are not into EUC but are looking for design related content, you may still find some helpful chapters, as there are not that many books on IT architecture.


Radical VMware EUC business ideas

We are almost a month into 2016 and the stream of 2015 recap and 2016 prediction articles and blog posts has finally stopped. But out of these and some recent thoughts, I came up with a few radical ideas for the VMware EUC business unit.

I’m well aware that these have pretty much no chance at coming true, but I thought they would make for some interesting conversations. Who knows, maybe the right person will read this and it will all come true.

 

Create an open alternative 

Divorce VMware EUC products from only working on vSphere as the hypervisor platform. This does not mean that the pair is not a good match; it just means that it could be a great opportunity to open up to other hypervisors. Today this is the approach that Citrix offers by supporting multiple hypervisors and offering their customers the choice of platform to deploy on. This flexibility helps by allowing others to drive innovation and offer different cost alternatives when it comes to licensing. VMware is working on building Project Enzo as the future of EUC, which is being built from the code of the Desktone DaaS product. The original Desktone product supported multiple hypervisors, so the code and support were already there to start with.

VMware Workspace can still work closely with VMware on vSphere integrations, but they would be focused on using APIs and should be loosely tied to specific vSphere versions. Today, if you deploy the whole Horizon suite stack with vSphere and maybe even VSAN, and you need to update vSphere for a bug or new feature or want the latest VSAN release, a hypervisor upgrade is required. A hypervisor upgrade usually forces a Horizon upgrade (and vice versa) unless it’s just a maintenance release. These updates may also force a database version update if you were lagging on your SQL version already. As the stack continues to grow, the chain of cascading dependencies has gotten very long, and upgrading one layer can force a number of other upgrades that you were not interested in doing.

By offering a more open integration with vSphere this could be reduced, and by supporting other hypervisors you open up a number of other alternatives. VMware has been focused on trying to build the ultimate EUC product suite offering and giving customers a single vendor to deploy. But the reality is that most customers are still deploying at least one other vendor for profile management (UEM) and monitoring tools.

It’s a long and hard process for a single company to build a perfect product suite offering. This does not mean VMware should stop trying, but by offering an all-VMware and an open alternative they can give customers the ultimate choice.

By having more freedom around the hypervisor, VMware Workspace could build their next generation management layer in any cloud service. This means that they could host or allow customers to build Enzo management layers in AWS or Azure rather than being stuck with VMware cloud offerings that are not as widely adopted.

Below are the current VMware suite offerings; there are a number of options, and you can see that as you move up there are a large number of products.

[Diagram: current VMware suite offerings]

 

The open alternative would allow customers to seek out the best supporting products and not have to pay for VMware legacy products when they want other alternatives.

[Diagram: open suite alternative]

 

Spin out EUC Business unit

The idea here is that VMware would spin out the EUC business unit into its own private or publicly traded company. This idea would make the first idea a lot more feasible. We will call this new entity VMware Workspaces; nobody better register that domain name before I get the chance.

VMware Workspaces would own the Horizon (Apps & VDI), App Volumes, UEM, Mirage, Workspace, ThinApp, and AirWatch products. They should probably also own and control the Workstation and Fusion products since they are desktop related as well.

The new entity can still work closely with the VMware hypervisor team and other teams to continue to build integration between the core products and the EUC products. But the new VMware workspace company is free of any legacy core data center baggage and roadblocks that keep them from innovating and moving to market faster. I think this new freedom could greatly benefit the existing products, processes and allow for the new entity to advance their solutions faster.

VMware Workspace would also be free to offer integrations with 3rd party solutions and provide greater options for customers to build solutions from as discussed earlier. They would still have an all VMware option, but by offering the open option the product licensing can be reduced.

I also think that a separate EUC entity would be free to innovate without having to conform to legacy VMware products. A good example around this would be EUC monitoring. The existing vROPS monitoring tool is heavily deployed for core VM management, but does not do a great job for EUC monitoring. It also is very polluted when you are forced to load different management plug-ins to support different platforms. By being free to create their own monitoring tool a better product could be released.

In the past, this probably would not even have been possible, but in the last 2 years VMware has built a great leadership team in the EUC business unit. Along with acquiring several products in this time period, they have also grabbed some excellent leaders, ones that are better suited to lead a company rather than a business unit. The executive team for a future VMware Workspace entity could be Sanjay Poonen as CEO, Harry Labana and Noah Wasmer as VPs, and Shawn Bass as CTO. Also, with the growth of EUC product sales and the AirWatch purchase, there is now probably plenty of revenue to support a separate entity.

 


VMware Horizon 6.1 brings new features and a peek at the future

Today brings another update to VMware Horizon: version 6.1 is being announced. With this update come several new features and a peek at a few others expected in a future release. The NVIDIA GPU support is the worst-kept secret, since it was announced that vSphere 6 would have vGPU support. It was only going to be a matter of time until Horizon was updated to take advantage of the new vGPU feature.

Note: Some of the tech preview items will only be available via the public VMware demo site or via private requests. Not all tech preview items will be included in the GA code like many have been in the past.

The summer of 2014 saw the release of Horizon 6.0 and the ability to present RDS based applications. It was missing a number of features, and VMware quickly closed the printing gap in 6.0.1. Today in 6.1 we are seeing several new features, which I will cover in more detail. A few other features will enter tech preview mode and are likely to be released in an upcoming version.

[Image: Horizon 6.1 new features overview]

 

 USB Redirection

In 6.1 the ability to redirect USB storage devices for Horizon applications and hosted desktops is now available. This helps close another gap that existed. It will only be available on Windows Server 2012/2012 R2 OS versions.


 

Client Drive redirection

This is something that has been available in Citrix XenApp since the stone ages. It will only be available as a tech preview for now, but I’m sure we will see it some time this year. Initial support is for Windows clients only, with other OSes coming later.


Horizon Client for Chromebooks

The current option if you want to use a Chromebook as your endpoint is to access Horizon via the HTML5 web browser. This limits you to connecting only to a desktop, because Horizon apps are not supported over HTML5. Without a proper client, pass-through of items such as USB devices is not possible either.

The Horizon client for Chromebooks will be based on the Android version that has been around already. There has been growing demand for this client. It will be available as a tech preview sometime in Q1/Q2 of 2015.

Cloud Pod updates

The cloud pod architecture was released last year to provide an architecture for building a multi-site Horizon installation. The initial version was not that attractive in my eyes. The updated version in 6.1 brings the configuration and management parts of cloud pod into the Horizon manager. In the previous version this had to be done via the command line, and global entitlements were not shown in the Horizon manager.

Other Items

We also see a number of other check-the-box type items that are expected due to the vSphere 6 updates.

  • VVOL support for Horizon 6 desktops
  • VSAN 6 support
  • Larger cluster size support for VSAN 6 and higher densities
  • Support for Windows 2012R2 as a desktop OS
  • Linux VDI will be a private tech preview option

 

 

 

 
