VMware Horizon 6.1 brings new features and a peek at the future

Today brings another update to VMware Horizon: version 6.1 is being announced. With this update come several new features and a peek at a few others expected in a future release. NVIDIA GPU support is the worst-kept secret, since it was already announced that vSphere 6 would have vGPU support; it was only a matter of time until Horizon was updated to take advantage of the new vGPU feature.

Note: Some of the tech preview items will only be available via the public VMware demo site or via private request. Not all tech preview items will be included in the GA code, as many have been in the past.

The summer of 2014 saw the release of Horizon 6.0 and the ability to present RDS-based applications. It was missing a number of features, and VMware quickly closed the printing gap in 6.0.1. Today in 6.1 we are seeing several new features, which I will cover in more detail. A few other features will enter tech preview mode and are likely to be released in an upcoming version.

New Features


USB Redirection

In 6.1, the ability to redirect USB storage devices to Horizon applications and hosted desktops becomes available, closing another long-standing gap. It will only be available on Windows Server 2012/2012 R2 OS versions.



Client Drive redirection

This is something that has been available in Citrix XenApp since the stone ages. It will only be available as a tech preview for now, but I’m sure we will see it ship some time this year. Initial support is for Windows clients only, with other OSes coming later.


Horizon Client for Chromebooks

The current option, if you want to use a Chromebook as your endpoint, is to access Horizon via the HTML5 web browser. This limited you to connecting to a desktop only, because Horizon apps were not supported over HTML5. Without a proper client, pass-through of items such as USB devices was not possible either.

The Horizon client for Chromebooks will be based on the Android version that has been around for a while. There has been growing demand for this client. It will be available as a tech preview sometime in Q1/Q2 of 2015.

Cloud Pod updates

The Cloud Pod architecture was released last year to provide a way to build a multi-site Horizon install. The initial version was not that attractive in my eyes. The updated version in 6.1 brings the configuration and management parts of Cloud Pod into the Horizon manager. Previously this had to be done via the command line, and global entitlements were not shown in the Horizon manager.

Other Items

We also see a number of other check-the-box items that are expected due to the vSphere 6 updates.

  • VVOL support for Horizon 6 desktops
  • VSAN 6 support
  • Larger cluster sizes for VSAN 6 and higher densities
  • Support for Windows 2012 R2 as a desktop OS
  • Linux VDI as a private tech preview option






Configure Active Directory authentication for Nutanix Prism

The more I work with Nutanix, the more I learn about and like the product. There have been a few things on my to-do list lately, plus a few ideas spawned from customer conversations, so I will be writing up some articles on these topics; enabling AD authentication is the first one.

In this post I will walk through the steps needed to enable AD as an authentication source. You will still be able to use local accounts if you wish.

Configure AD source

The first step is to create a link to the AD domain that we wish to use for authentication. Click the settings icon in the upper right of the Prism interface, then find and click the Authentication option, as shown below.



This will open a new window that will allow you to configure a new directory source. As shown in the image below click the button to configure the details for your AD domain.



On the first line you will input a friendly name for the domain; this did not seem to allow spaces. The second line is the actual domain name. The third line is the URL for the directory and needs to be in the format shown below; I used an IP address to keep things simple in the lab. The fourth line lets you choose the type of directory; currently it only supports AD.
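For reference, the directory URL is a standard LDAP URL of the form ldap://<host>:<port>, with 389 being the default plain-LDAP port. A minimal sketch, using placeholder lab values, that builds and sanity-checks the URL before you paste it into Prism:

```python
from urllib.parse import urlparse

def build_directory_url(host, port=389):
    """Build the ldap:// URL for the directory entry."""
    url = f"ldap://{host}:{port}"
    parsed = urlparse(url)
    # Require the ldap scheme and a host:port pair before using it.
    if parsed.scheme != "ldap" or not parsed.hostname or not parsed.port:
        raise ValueError(f"not a valid LDAP URL: {url}")
    return url

print(build_directory_url("10.10.10.10"))          # lab DC by IP
print(build_directory_url("dc01.lab.local", 389))  # or by FQDN
```

The host names and IPs above are hypothetical; substitute your own domain controller.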



Once you have input the AD details and saved them, you will be taken back to the screen shown below, which should now list summary information about the AD domains configured in Prism. In my tests I configured two different domains.



Role Mapping

The idea of role mapping is to select an individual AD user, group, or OU and assign it a level of access in Prism. You start this from the settings menu again, by selecting Role Mapping as shown below.



A new pop-up window, shown below, will open. Click the New Mapping option to get started.



On the first line, choose which AD domain you will be using for this role mapping. On the second, choose what you will be mapping to; the options are AD Group, AD OU, or a user. The third choice is which Prism role the mapping will be assigned. In the values field, input the name of the AD item you are mapping to; I chose a group, so I needed to input the AD group name.

Note: It will accept inputs that are not correct, meaning it does not seem to validate them. I input the group name in all lowercase; this did not work but was accepted. I came back later and changed it to match the capitalization shown in AD, and it worked right away.
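Since Prism appears to store the mapping value exactly as typed and match it case-sensitively, a small pre-check like this sketch can catch a casing mismatch before you save the mapping. The group names here are hypothetical stand-ins for what you would pull from AD:

```python
def match_group(candidate, ad_groups):
    """Return the exact-case AD group name for candidate, or None.

    An exact-case match is what we want to submit to Prism, since
    a lowercase variant was accepted by the UI but did not work.
    """
    if candidate in ad_groups:                  # already correctly cased
        return candidate
    # Fall back to a case-insensitive lookup so we can report the
    # properly cased name instead of saving a value that fails.
    by_lower = {g.lower(): g for g in ad_groups}
    return by_lower.get(candidate.lower())

ad_groups = ["Prism-Admins", "Prism-Viewers"]   # hypothetical groups
print(match_group("prism-admins", ad_groups))   # -> Prism-Admins
```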



After entering and saving your new mapping, the screen below shows the new entry. From here you can also add more mappings or edit or delete an existing one.



The image below just shows the proper group name after I came back and updated it.



Next it was time to try authenticating to Prism, so I attempted to log in using the different ways of entering a user name. It does work with the username@domain.name format, but did not like the domain_name\user.name option.
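Since the user@domain.name form worked and the domain_name\user.name form did not, a tiny helper that normalizes whatever a user types into the UPN style can avoid failed logins. The domain below is a hypothetical lab value:

```python
def to_upn(login, dns_domain):
    """Normalize a login to the user@domain form that Prism accepted."""
    if "@" in login:                        # already UPN style
        return login
    if "\\" in login:                       # DOMAIN\user style
        _netbios, user = login.split("\\", 1)
        # The NetBIOS name is not the DNS domain, so substitute the FQDN.
        return f"{user}@{dns_domain}"
    return f"{login}@{dns_domain}"          # bare username

print(to_upn("LAB\\jsmith", "lab.local"))   # -> jsmith@lab.local
```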



And once logged in, the upper right corner of Prism shows the authenticated user. It was now showing my username.



Overall, setting this up was a pretty simple process; I had it working in about 15 minutes.



No one application delivery strategy to rule them all

This topic has been coming up more often in conversations with customers when talking about the architecture of a modern EUC environment. Enterprises are looking for better ways to manage the computers their users rely on for work each day. A big portion of that is application management: installing, updating, and controlling access. A common request is "I don't want multiple ways to do this type of work"; a single approach is the desire. To that I say:

“No one application delivery strategy to rule them all”



I understand the desire to have one master way to package and deliver applications, especially for large clients, and there are plenty of options for doing this. But depending on which method you choose, it might be ideal for the physical world yet break many of the benefits of the virtual world, or vice versa. Consider a customer with 100,000 users that only intends to virtualize 20,000 of them: they will be left with two very large environments to manage.

The physical computer environment is very static; customers tend to push applications to computers. This push typically does not need to closely follow the provisioning of the PC, and there can be a bit of a delay while the apps install. Customers are exploring other options, such as RDS-based delivery and application virtualization like App-V, ThinApp, and others, to help with these issues.

In a virtual desktop environment desktops are provisioned quickly and applications need to be present and ready at the time of user login. There typically is not time in most environments to wait for the classic application push approach, because the desktop may be disposable and would need a push every day or more. Users will also not be willing to wait for the apps to appear after login. Vendors like VMware and Citrix have built multiple options for delivering applications at the point of desktop creation or user login.

The problem breaks down to this: if you move your legacy physical strategy into the virtual world, you will break or lose some of the features and value that virtual desktops deliver. If you want to adopt the tools from VMware or Citrix instead, you will have to license that application technology for all of your physical devices, and that can be very expensive.

This is why I think people need to be comfortable having two strategies: one to modernize their physical PC environment and one for the virtual desktop environment, each seeking to offer the best and most complete experience in its space. This may or may not require you to package apps twice, but it will result in you being able to provide the best possible solution in each.





Creating a VMware Datastore on DataGravity storage

I have recently been evaluating and getting to know the initial storage offering from DataGravity. In short, they offer a unique storage array that combines hybrid storage and storage analytics in one simple, easy-to-use package. As I work with the product I will probably write up a few blog posts on how to work with it. Expect a detailed review over at Data Center Zombie soon after the new year.

I’m finding the product very easy to work with and thought a simple walk-through of how to create a new export and mount it as a VMware datastore would be helpful.

Step 1

Upon logging into the management page for a DataGravity array you will see the following welcome screen. I will be creating some new storage, so I will click on the storage choice to proceed.

DataGravity Datastore - Step 1



Step 2

The Storage choice displays a number of options once clicked. These are the major functions for creating and managing storage on the array. Click Create Datastore to proceed with the task for this post.

DataGravity Datastore - Step 2



Step 3

The first step of creating the mount point that will become a datastore is to provide a name, a capacity, and an optional description.

DataGravity Datastore - Step 3



Step 4

This step is where you grant access to the hosts in the clusters that will utilize this new datastore. The image shows that I have already added the first host; by clicking the blue plus button you can add the other hosts.

DataGravity Datastore - Step 4



Step 5

The following image shows the process for adding another host. You can enter the host name or IP address for enabling access.

DataGravity Datastore - Step 5



Step 6

The policy step is where you can select an existing Discovery Policy or create a new one. In short, these policies govern how the data is analyzed and protected. Once ready, click the Create button at the bottom, and the mount point will then be ready to be configured on the vCenter side.

DataGravity Datastore - Step 6



Step 7

Now that the mount point is ready, I have selected one of my vSphere hosts and will add NFS storage to it. I have provided the IP for the data path to the storage array. The folder is the same as the mount point name we created earlier, and the datastore name can be whatever you like; I made it the same as the mount point name.

DataGravity Datastore - Step 7

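The three inputs on that form (server IP, folder, datastore name) are also all you need if you script the mount instead of clicking through the client. A minimal sketch, with hypothetical lab values, of collecting those values; they map onto an API call such as pyVmomi's CreateNasDatastore if you automate the vSphere side:

```python
def nfs_datastore_params(server_ip, mount_point, name=None):
    """Collect the values the Add NFS Datastore form asks for.

    server_ip:   data-path IP of the storage array
    mount_point: exported folder, same as the mount point name
    name:        datastore name; defaults to the mount point name
    """
    # NFS exports are absolute paths, so ensure a leading slash.
    path = mount_point if mount_point.startswith("/") else "/" + mount_point
    return {
        "remote_host": server_ip,
        "remote_path": path,
        "datastore_name": name or mount_point.lstrip("/"),
    }

# Hypothetical lab values mirroring the walkthrough.
print(nfs_datastore_params("10.10.20.30", "VMW-DS01"))
```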


Step 8

With the mount point created and presented on the VMware side, I took a look back in DataGravity to list the mount points on the array. From here you can see what was created, along with details about capacity and protection policy.

DataGravity Datastore - Step 8



Step 9

The last view looks at our newly created mount point. I have moved a few VMs onto the datastore, and details about them have already started to appear. DataGravity is VM-aware, so you have access to more data than a legacy array would show.

DataGravity Datastore - Step 9



By now you have an idea of how easy it is to create and present a new datastore. The other functions in DataGravity are just as easy to use.



Tintri blows the top off with 3 new models and a bucket of features

Tintri has been selling storage arrays for almost four years now and we have seen them go from a single model offering to the current three models. Along the way they have added features and supported additional hypervisors. Today Tintri is blowing the top off of things and announcing their biggest update since coming out of stealth. If you would like to learn more about Tintri you can read the in-depth Tintri review over at Data Center Zombie.

The T800 Series

Part of today's release is a new set of storage arrays from Tintri. The T800 series is a line of bigger, faster arrays that support more VMs. I like the name T800; it reminds me of another T800 that kicked a lot of butt in the Terminator movies. The T800 series utilizes Tintri's new compression feature on the capacity tier to offer greater effective capacity. The compression is all done in-line, and the values below are based on a modest 2:1 sample. Every model in the 800 series offers more flash than the previous 600 series of arrays.

  • T820 – 23TB effective capacity and 750 VMs
  • T850 – 66TB effective capacity and 2000 VMs
  • T880 – 100TB effective capacity and 3500 VMs
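Since those are effective figures based on the quoted 2:1 in-line compression sample, you can back out a rough estimate of the physical capacity behind each model:

```python
def raw_capacity_tb(effective_tb, compression_ratio=2.0):
    """Estimate physical capacity behind a quoted effective figure."""
    return effective_tb / compression_ratio

# Effective capacities from the T800 announcement, at the 2:1 sample ratio.
for model, effective in [("T820", 23), ("T850", 66), ("T880", 100)]:
    print(f"{model}: ~{raw_capacity_tb(effective):.1f} TB physical")
```

The estimate is only as good as the compression ratio your own data achieves.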



Tintri OS 3.1 Updates

Along with the new hardware, today's announcement covers the new Tintri OS version 3.1. This update brings a number of new features along with support for the new array models. The latest OS update is applicable to both the 600 and 800 series of storage arrays.

SRM support – Tintri was already offering storage-based replication on a per-VM basis, which was a great feature. The new update brings support for VMware SRM to allow for automated failover and recovery of VMs.

Encryption – Another new feature is data-at-rest encryption on all T800 and T600 series storage arrays. Tintri is using self-encrypting drives in these arrays, and OS version 3.1 brings internal key management. At some future date we will see Tintri offer support for external key management options.

Automation updates – In the previous OS update, Tintri released PowerShell support as the first step in allowing admins and applications to automate the storage platform. With today's news they are announcing a brand new REST API. This will allow customers to further build automation and interface with the storage in new ways.
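To give a flavor of what a REST interface enables, the sketch below builds a request URL for a per-VM query. The base address, endpoint path, and parameter name here are hypothetical placeholders; the real paths come from Tintri's REST API documentation:

```python
from urllib.parse import urlencode

BASE = "https://tintri.lab.local/api"       # hypothetical array address

def resource_url(resource, **filters):
    """Build a GET URL for a (hypothetical) REST resource with filters."""
    query = f"?{urlencode(filters)}" if filters else ""
    return f"{BASE}/{resource}{query}"

print(resource_url("vm", name="sql01"))     # e.g. look up one VM by name
```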

Hyper-V support is coming soon. In my briefing with the Tintri product team, they mentioned that Hyper-V support is expected to be released in late Q4 of this year.




CloudPhysics helps unearth some scary statistics in the Halloween spirit

I have written about CloudPhysics in the past, and I think the tool has some cool features. They continue to add features and think up new ways to use all the data they collect about your data center and compare it to others.

There are some scary things that can hide in your data center. The infographic below was pretty cool; the part that scared me was that 22% of vSphere 5.5 hosts are still unprotected against the Heartbleed SSL vulnerability. Take some time, if you have not already, to get to know the CloudPhysics offering.

You can read more about the features at the official CloudPhysics blog post here.





People, VMware vCAC is not easy; it takes effort to get value

I tend to get the feeling that many customers expect that if they purchase vCloud Automation Center (vCAC) some type of magic will happen. Maybe it’s a disconnect in how the product is marketed or something else, but the message is not getting through. I’ve noticed this through customer meetings before a sale or during the architecture phase of a project.

vCAC as a product can be very powerful and allows a lot of flexibility to solve complex problems. But vCAC does not contain a bunch of magical workflows that will automate your data center out of the box. The product offers a self-service portal, a service catalog, machine blueprints, tie-ins with automation tools, and the ability to surface data from ITBM for business reporting.

If you want to do anything beyond letting someone self-provision a template from vCenter, you need to do custom work. That work means creating blueprints with custom properties and logic, and tying in with vCenter Orchestrator or another tool for the more complex tasks. This is where all the magic is accomplished. The point to drive home is that just installing vCAC does not win you the battle or give you a feature-rich "cloud". You will need to invest a lot of time, or hire someone, to build out the automation and orchestration that provide the real value in the solution.

I don’t intend to scare anyone away, but rather to clear up, at a high level, what should be expected from a project based on vCAC.

