So I’ve been working with a customer on a specific use case that required extensive use of VMware View Local Mode; I will explain more about this in a moment. To sound a bit like a bad TV show, the names in this story have been changed to protect the innocent. First I’ll talk a bit about the customer’s requirements and then explain how View Local Mode works.
Now on to the customer use case that brought up all these questions and led me to do some deep-dive research into View Local Mode operations. The use case I was looking into was for a consulting firm. They have teams of consultants who work at customer locations 80% of the time and are only in a remote office the other 20%. There would be 1,500 mobile users and 500 office workers working in a connected mode, meaning they are always in an office or a location with a network connection.
So naturally we talked through several designs that might work for them. Two primary options would meet their needs, and both would be built on VMware View 4.6.
Design #1

This design would use VMware View 4.6 to provide virtual desktops to all 2,000 users. The office workers are the easy part: they would be provided virtual desktops via Linked Clones, and their profiles would be layered on with one of the third-party profile tools. A few of the tools out today are AppSense, Liquidware Labs ProfileUnity, RingCube, and Unidesk, among several others.
Now the mobile users would be provided persistent desktops from View with the option to check out for Local Mode. This would allow users to check out their desktop so that it runs locally on their laptop. The checkout process will take a while, because the first time a user checks out they must download the entire virtual machine. Once checked out, they can replicate changes back to the datacenter to keep the copy that is locked in the datacenter up to date. This way, if there is a disaster on their laptop, they can recover up to the point of their last sync. This method is pretty straightforward to design; the drawbacks are the additional disk space required and that the desktops must be managed like standard PCs when it comes to OS patching. The benefit is that with persistent virtual machines the user only needs to download the entire VM once, unless they check it out on a different endpoint. This greatly reduces time and bandwidth requirements.
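To get a feel for what that first checkout costs, here is a quick back-of-the-envelope calculation. This is only a sketch: the 30 GB desktop size and 20 Mbps branch-office link are assumptions for illustration, not figures from the customer, and real transfers involve compression and protocol overhead.

```python
def checkout_hours(vm_size_gb: float, link_mbps: float) -> float:
    """Rough time to download a full VM image over a dedicated link.

    Ignores compression, protocol overhead, and competing traffic,
    so treat the result as an optimistic estimate.
    """
    megabits = vm_size_gb * 8 * 1024    # GB -> megabits (1 GB = 8192 Mb)
    return megabits / link_mbps / 3600  # seconds -> hours

# Example: an assumed 30 GB persistent desktop over a 20 Mbps WAN link
print(round(checkout_hours(30, 20), 1))  # about 3.4 hours
```

Numbers like this are why the one-time nature of the persistent-desktop checkout matters so much in Design #1.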
Design #2

With this design we are still trying to accomplish the same goal, we’re just going about it a different way. The connected office workers would be designed in the same manner as Design #1. The difference comes in how we design for the mobile users. In this architecture we want to use the benefits of Linked Clones in VMware View. This saves on disk space and takes less effort for OS-level patching, since there is just one parent image to keep up to date and all Linked Clones pull from that image.
The tricky part comes in with the Transfer Servers and users having to do the initial image sync on checkout. Each time the parent image is recomposed, for something like patching, every Local Mode user will have to download the entire parent image again. That is a lot of data to pull down for 1,500 users across 45 remote offices, so we need a method to ease this burden.
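To put rough numbers on that burden, here is a sketch using the user and office counts from the use case. The 20 GB parent image size is an assumption, and this ignores the compression the Transfer Server applies to the image.

```python
USERS = 1500    # mobile Local Mode users (from the use case)
OFFICES = 45    # remote offices (from the use case)
IMAGE_GB = 20   # assumed parent image size after a recompose

total_tb = USERS * IMAGE_GB / 1024          # total data crossing the WAN
per_office_gb = USERS / OFFICES * IMAGE_GB  # average load per office link

print(f"{total_tb:.1f} TB total, ~{per_office_gb:.0f} GB per office")
```

Even with generous WAN links, tens of terabytes per recompose cycle is clearly something to design around.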
The initial idea was simple: just put the View Transfer Servers out in the remote offices so users can pull their data from a local server. That turned out not to be possible, as I will explain in more detail below. The option that was uncovered was the ability to use a Web proxy at the remote site to cache the data flowing through it to users. The proxy can only cache the parent image data, since the other disks are user specific. Once the first user pulls down the updated parent image, the proxy populates its cache and speeds up the process for subsequent users. You can find out more about this in the View administration PDF guide. The OS delta disk and user persistent disk would still be pulled down from the datacenter across the WAN in this design.
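The effect of the caching proxy can be sketched the same way: instead of every user pulling the parent image across the WAN, each office link carries it roughly once, while the per-user delta and persistent disk data still cross the WAN for everyone. The image and per-user sizes here are assumptions for illustration only.

```python
USERS, OFFICES = 1500, 45
IMAGE_GB = 20   # assumed parent image size (cacheable at the proxy)
DELTA_GB = 2    # assumed per-user OS delta + persistent disk data (not cacheable)

no_proxy_gb = USERS * (IMAGE_GB + DELTA_GB)
with_proxy_gb = OFFICES * IMAGE_GB + USERS * DELTA_GB  # image crosses each office link once

print(f"without proxy: {no_proxy_gb} GB, with proxy: {with_proxy_gb} GB across the WAN")
```

Under these assumptions the proxy cuts WAN traffic by roughly an order of magnitude, which is why it made Design #2 viable for this customer.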
Facts about VMware View Transfer Servers
A Transfer Server is a server that handles the communications for users when they check a View desktop out or in. It accesses a compressed version of the parent image used by the Linked Clone View pool the user is a member of. If you allow a persistent desktop to be checked out, the Transfer Server does not cache it; the VM is pulled directly from the datastore it sits on.
- Transfer Servers must be virtual machines running on vSphere and part of the same vCenter Server as the View installation
- Transfer Servers should be kept in the datacenter, near the vSphere hosts and storage that contain the parent image
- They do not cache the delta disks or user persistent disks; these must be pulled directly from the source
- You can check desktops out and in via a View Security Server, but transfers are slower, around 50% of direct speed
- After a recompose of the parent image, users will be required to download the entire image again
- VMware recommends a maximum of about 20 concurrent transfers per server; in testing, a 1 Gbps network connection becomes saturated at that point. You will need to scale the number of Transfer Servers based on how many concurrent transfers you expect, as there is no hard limit on assigned users per server
- If you have multiple Transfer Servers, they use a repository to store the compressed image; this is just a CIFS or NFS share that all the servers must have access to
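The ~20-concurrent-transfers guideline above turns into a simple sizing helper. This is only a sketch: the ceiling is VMware’s rough guidance, and your own network testing should set the real number.

```python
import math

MAX_CONCURRENT_PER_SERVER = 20  # VMware's rough guidance per Transfer Server

def transfer_servers_needed(peak_concurrent_transfers: int) -> int:
    """Minimum Transfer Servers for an expected peak of concurrent check-outs/check-ins."""
    return max(1, math.ceil(peak_concurrent_transfers / MAX_CONCURRENT_PER_SERVER))

# e.g. if 6% of the 1,500 mobile users sync at once, that's 90 concurrent transfers
print(transfer_servers_needed(90))  # 5 servers
```

The hard part is estimating the peak concurrency, not the division; recompose events are the worst case, since they push every Local Mode user toward a full image download at roughly the same time.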
If you have more questions about how anything in this process works, drop your question in the comments and I will try to get you an answer. I will also try to keep this post up to date as new things are discovered about the Local Mode process.
Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix and owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and cloud project designs. Awarded VMware vExpert status six years running, 2011 through 2016. VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design