Rubrik reaches version 2.0 in a few short months

You may be thinking that it has only been a few short months since we first heard from Rubrik as they exited stealth mode, and you would be right. The team at Rubrik must not sleep much, because they have been working hard on several small code releases and are proud to announce they have reached version 2.0. That is a big deal for such a young company in such a short amount of time.

There are a number of impressive new features in this version, along with an addition to the hardware appliance line. I’ll cover these in more detail below.


Version 2.0 Features

Replication – Rubrik now includes replication in its feature set, from site to site or box to box, however you want to say it. Whether I have a single brik or multiple briks in each site, I can now replicate my protected VMs from site A to site B while still keeping an archive copy in the cloud. This is a highly sought-after feature and would be a requirement for most multi-site organizations considering adoption. Replication ties in smoothly with Rubrik’s SLA policies: you turn on replication at the SLA level, and any VMs protected by that policy will be replicated. This is flexible enough to cover anything from a single VM to every VM and all points in between. Lastly, replication stays WAN-efficient by taking advantage of global deduplication to reduce the amount of data that needs to be sent between sites.
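Rubrik's actual API isn't shown here, but the policy-level semantics described above can be sketched in a few lines of Python. All class, field, and VM names below are my own invention for illustration, not Rubrik's:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SLAPolicy:
    """Toy model of an SLA policy: replication is a property of the policy,
    and every VM the policy protects inherits it automatically."""
    name: str
    replicate_to: Optional[str] = None  # target site, e.g. "site-b"; None = off
    vms: List[str] = field(default_factory=list)

    def vms_to_replicate(self) -> List[str]:
        # All protected VMs replicate, or none do -- it's a policy-level switch.
        return list(self.vms) if self.replicate_to else []

gold = SLAPolicy("gold", replicate_to="site-b", vms=["sql01", "exch01"])
bronze = SLAPolicy("bronze", vms=["test01"])
print(gold.vms_to_replicate())    # ['sql01', 'exch01']
print(bronze.vms_to_replicate())  # []
```

The point of the sketch is the "from a single VM to every VM" flexibility: which VMs replicate is decided entirely by which policy they sit in.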

Larger Brik – With the release of 2.0, Rubrik is also announcing the availability of the R348 brik. This model contains nodes with 8TB drives, versus the 4TB drives in the original appliances. The R348 allows customers to start with a larger capacity option, and since briks can be mixed and matched, you can add whichever node type meets your backup capacity requirements.

AD Integration – The original release only supported local accounts for authentication. Version 2.0 brings the ability to use Active Directory accounts to authenticate to the Rubrik management interface for managing the appliance and backups. This is a normal progression for young products and a must for larger organizations; it’s good to see Rubrik checking off the required boxes in a timely fashion.

Swift Support – Since its release, Rubrik has been able to archive backup data to Amazon; with 2.0 you will also be able to archive to Swift, the object-based storage from the OpenStack project. This allows customers to archive to locally or remotely managed Swift storage, or to cloud-hosted Swift storage. It gives customers one more destination option for their long-term archive backups.

Application-Aware Backups – This is something that will make a lot of admins and application owners happy. With 2.0, the ability to leverage a VSS provider to protect MS SQL, MS Exchange, Active Directory, and SharePoint workloads will be welcomed. This is also a major requirement for most organizations when considering a backup solution.

Detailed Reporting – With the new version, Rubrik is improving its ability to inform admins about what is happening with the solution. Reporting will gain technical detail, active job information, and failure reporting, along with other data. This is a natural growth path as feedback comes in from customers and testers of the product.

Capacity Planning – The first release already reported how much space backup data was consuming; with 2.0 Rubrik is improving this data. You will also get insight into daily growth rates and an estimate of remaining capacity, expressed as the number of days until your current nodes fill up. This will let you adjust your archive-to-cloud configuration or know when to purchase additional capacity.
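Rubrik hasn't published how it computes that estimate, but the back-of-the-envelope version of the days-until-full math is simple. All the figures below are made up:

```python
def days_until_full(total_tb, used_tb, daily_growth_tb):
    """Estimate days of headroom left at the observed daily growth rate."""
    if daily_growth_tb <= 0:
        return None  # flat or shrinking usage never fills the cluster
    return (total_tb - used_tb) / daily_growth_tb

# 24TB cluster, 15TB used, growing half a TB a day:
print(days_until_full(24.0, 15.0, 0.5))  # 18.0 days of headroom left
```

A real implementation would smooth the growth rate over a trailing window rather than trust a single day's number, but the shape of the answer is the same.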

Compliance Reports – This is a simple way to see which VMs are in or out of compliance with their backup policies. In short: do I have VMs that failed or missed a backup, putting them out of compliance?
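The underlying logic is easy to picture. Here is a minimal sketch of the idea (my own toy function, with hypothetical VM names, not Rubrik's implementation):

```python
def out_of_compliance(vms):
    """vms: dict of name -> status of the last backup
    ('success', 'failed', or 'missed'). Returns the non-compliant names."""
    return sorted(name for name, status in vms.items() if status != "success")

report = out_of_compliance({"web01": "success", "db01": "failed", "app01": "missed"})
print(report)  # ['app01', 'db01']
```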

Auto Protect – In the first release you would assign VMs to a protection policy and they would be backed up. With this new feature, admins can apply a policy at the vCenter, data center, folder, host, and cluster levels. This will protect any existing or new VMs at that level.



The 2.0 release is a pretty large update for Rubrik and has some great new features. I look forward to the upgrade and getting some hands on time with the update soon.


About Brian Suhr

Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix and owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and cloud project designs. Awarded VMware vExpert status six years running, 2011 - 2016. VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design


Nutanix .Next Conference announcements summary and recap

Today kicks off the first day of Nutanix’s first conference. It’s a day that has been anticipated for a while, and I really wish I could have been present. The .Next conference is certainly going to be the hottest hyperconverged infrastructure event of 2015. There are a ton of details and announcements coming out of .Next today, and I’m going to try to break them down for you here. Over the next couple of weeks there will be deeper conversations on many of these topics, which may lead to further blog posts.

To find out more about Nutanix, read my detailed Nutanix product review at Data Center Zombie and my hyperconverged comparison article.

From One Product to Two

Historically, Nutanix has sold only a single product, called the Virtual Computing Platform. I never heard many people refer to it by the product name; since there was only a single product, it was always just called Nutanix. Today Nutanix is announcing that it is splitting the platform into two separate products that fall under the new Xtreme Computing Platform (XCP) badge.



In the past, Prism was just part of the entire Nutanix solution. It has been the primary point of management for several versions now. Today Nutanix announced that Prism is becoming its own product and will be available in this form sometime in Q4 of 2015. As part of this, additional functionality is being added that I will cover later in this post. Nutanix Acropolis is the second product under this new approach. Acropolis will contain all of the storage functionality and data services commonly contained within the Nutanix Distributed File System (NDFS). There are also many new features being released and announced as part of Acropolis that I will cover soon.



Nutanix Acropolis

While many in the industry will laser-focus on the KVM-based Acropolis hypervisor option, Acropolis as a product is far more than that. The image below shows how Nutanix thinks of features within Acropolis as falling into two major buckets. The App Mobility Fabric seems to focus on availability, automation, and integration with the greater stack. The Distributed Storage Fabric portion is about data protection, storage services, and hypervisors.



The following image touches on how the App Mobility Fabric features provide the mobility, automation, availability, and integration mentioned above. By default these are not tied to a hypervisor platform, but they may have different release dates for the different hypervisors. In Q4 this feature set will expand to offer VM migration between hypervisors, allowing powered-off VMs to be moved between vendors while staying on the Nutanix storage solution. This opens up a lot of opportunities for easily using different hypervisor platforms across environments or projects within the same organization.



If you follow the blogs and tech rags you have probably seen the rumor that Nutanix was creating its own hypervisor to compete with VMware vSphere. That was just a rumor, but based on how some of these updates are messaged I can see how people jumped to conclusions. Nutanix still supports vSphere, Hyper-V, and KVM, as it has for some time now. What it is doing is improving the availability and operational management story around the KVM hypervisor. This new approach, along with the other hypervisors, is what Acropolis brings to market.



Many of the features that now fall under Acropolis have been available in the Nutanix platform for some time. The image below explains what’s available today, and the following images touch on some of the new features.



In release 4.1.3, Nutanix will bring an Image Service to the KVM-based hypervisor offering.



The new KVM High Availability (HA) feature is still only a Tech Preview, but it has to be one of the most exciting features. It will bring to KVM an experience similar to what VMware customers have enjoyed for years with vSphere HA. VMs will no longer require manual intervention to restart on the remaining nodes in a cluster after a host failure.




Nutanix Prism

We can see from the image below that Prism is focused on providing and improving the one-click management features that are already there, on one-click operational insight that increases visibility into what is happening within your environment, and lastly on expanding Prism to perform one-click troubleshooting.



The upcoming version of Prism, later in 2015, is going to take the features we know and love today to the next level, along with adding a bunch of new ones. The slideshow below gives a little sneak peek at what we can expect. The VM management offering is being expanded, and there is going to be an L2 network configuration option to assist with cluster deployments. The storage management capabilities are being improved. And lastly, for cluster management, the one-click upgrades we like today will continue as a theme for other areas.


Operational Insight Features

This operational insight area of Prism is going to be a big section of improvements and new features over the existing versions. We can expect to see a Service Impact Analysis that will help correlate alerts to application impact. The upcoming Root Cause Analysis will help isolate issues and lean on knowledge from the Nutanix global community. And a Remediation Advisor will give targeted recommendations on how to optimize the solution.


I’m excited to see that capacity planning is coming to the Prism product in Q4. I think this is an important addition for Nutanix, and it looks like they took their time to release a great product the first time around. A capacity trends set of features will report on what is going on in your environment and provide time estimates. An Optimization Advisor will help you find space that might be reclaimable, such as unused VMs, snapshots, and other items. And last up, a What-if Analysis will estimate what resources may be needed for different scenarios. The cool part is that this feature will at some point be able to send data into the Nutanix sizing calculator, allowing for a better sizing and scaling exercise.


A new search feature called Prism Instant Search will be coming as well. The image below shows some of the things you can do with it. Think of it as a Google-like search experience that allows Nutanix admins to find and accomplish tasks faster. I’m looking forward to seeing it in action.



Last up is the ability to create customizable dashboards in Prism. I’ve had a few customers looking for this ability, and they will be happy that it’s coming later this year. The image below looks like a great example, showing specific workloads and other alerts in a focused view that a specific role would be interested in. I also heard that real-time stats will be available from Prism; today there is a slight time delay or a page refresh needed to get updates.



As you can see, there is a lot of new and updated news from Nutanix .Next. I think most everything was covered, but I may have missed a few things, which I will try to update as I hear of them.



Nutanix presents Community Edition to the world

Nutanix Community Edition (CE) is neither the best nor the worst kept secret. A community edition of their storage platform has been loosely talked about for some time now, and as recently as a few months ago Nutanix leadership mentioned it in public interviews. Nutanix has been looking for an easier way for people in the tech community to get hands-on experience with its platform.

As a Nutanix Technology Champion (NTC) I was invited to participate in early trials of Community Edition with other NTCs. We were given briefings on the CE product, access to install it, and a private forum to provide feedback on the product and our testing.

The community edition will become available to the community during the .Next user conference in early June. You can sign up here to get notified when CE is ready to go.


What is Community Edition

Simply put, Nutanix Community Edition allows you to build a small hyperconverged cluster in your home lab. The product has an automated install. Although it’s not the same deployment method as the production product, it’s still better than doing the work by hand. To provide flexibility in the hardware people could use with CE, Nutanix could not reuse the same deployment process used for its production HCI appliances.

With Nutanix CE you will be able to build a hyperconverged lab that consists of one to four nodes. This allows a single-node Nutanix install for those who do not have a lot of hardware to play with, while those with larger labs can build a three- or four-node cluster. This type of install could bring some serious power to people’s home labs.

The Nutanix CE install is bare metal only today. It uses KVM as the hypervisor and deploys the Controller VM (CVM) on top of it as normal. Home labbers can then deploy VMs on the KVM cluster. After install you end up with a fully functional Nutanix cluster: it can dedupe, compress, and perform well. There are big plans too, such as the same one-click upgrades as the production product.

The following is a list of minimum and recommended hardware specs; these are the requirements today. I would expect them to loosen up and expand as the CE product matures.

  • Memory: 16GB min, 32GB or more recommended
  • Intel CPU with VT-x, 4 cores min
  • Intel-based NIC
  • Cold tier storage: 500GB min, 1 per server (max 18TB, 3x 6TB HDD)
  • Flash: 200GB min, 1 per server
  • Max of 4 SSD/HDD drives per node
  • Boot to USB drive or SATA DOM
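If you want to check a lab box against the minimums in that list before downloading anything, the comparison is trivial to script. This helper and the example host are my own, not part of CE:

```python
# CE minimums from the list above: 16GB RAM, 4 cores, 500GB cold tier, 200GB flash.
MINIMUMS = {"memory_gb": 16, "cpu_cores": 4, "hdd_gb": 500, "ssd_gb": 200}

def ce_spec_gaps(host):
    """Return the specs that fall below the CE minimums (empty dict = OK)."""
    return {k: host.get(k, 0) for k, v in MINIMUMS.items() if host.get(k, 0) < v}

lab_box = {"memory_gb": 32, "cpu_cores": 4, "hdd_gb": 1000, "ssd_gb": 128}
print(ce_spec_gaps(lab_box))  # {'ssd_gb': 128} -- flash is under the 200GB minimum
```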



What I wish it was

I think Community Edition is a great idea and hope it’s a big success. But as an avid home-lab person, I am not crazy about having to dedicate physical hardware to it. I know I will get better performance and an experience more like the real product. But as a community product, I think people just want to get their hands on it to play. This could be more easily accomplished by offering it as a virtual appliance that I can run on my existing lab, with no need to dedicate hosts and wipe disks. I can accept that a virtual appliance won’t give me the full experience and performance, but it would allow more people to play with Community Edition. I hope to see this as an option as CE matures.

I would like to say thanks to the Nutanix CE team for the experience and releasing a cool product to the community.



Configure Active Directory authentication for Nutanix Prism

The more I work with Nutanix, the more I learn about and like the product. There have been a few things on my to-do list lately, plus a few ideas spawned from customers, so I will be writing up some articles on these topics; enabling AD authentication is the first one.

In this post I will walk through the steps needed to enable AD as an authentication source. You will still be able to use local accounts if you wish.

Configure AD source

The first step is to create a link to the AD domain we wish to use for authentication. Use the settings icon in the upper right of the Prism interface. Find and click on the Authentication choice as shown below.



This will open a new window that will allow you to configure a new directory source. As shown in the image below click the button to configure the details for your AD domain.



On the first line you input a friendly name for the domain; this did not seem to allow spaces. The second line is the actual domain name. The third line is the URL for the directory and needs to be in the format shown below; I used an IP address to keep things simple in the lab. The fourth line lets you choose the type of directory; currently it only supports AD.
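The screenshot with the exact format isn't reproduced here; in my lab the directory URL was an LDAP URL along the lines of ldap://10.0.0.10:389 (treat that specific shape as my assumption, and the IP as a made-up lab address). A quick way to sanity-check the format before pasting it into Prism:

```python
from urllib.parse import urlparse

def looks_like_ldap_url(url):
    """Rough check: ldap:// or ldaps:// scheme plus a host (name or IP)."""
    parsed = urlparse(url)
    return parsed.scheme in ("ldap", "ldaps") and bool(parsed.hostname)

print(looks_like_ldap_url("ldap://10.0.0.10:389"))  # True
print(looks_like_ldap_url("10.0.0.10"))             # False -- missing the scheme
```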



Once you have input the AD details and saved them, you will be taken back to the following screen, with a sample shown below. It should now list summary information about the AD domains configured for Prism. In my tests I configured two different domains.



Role Mapping

The idea of role mapping is to select an individual AD entry or group and assign it a level of access in Prism. You start this from the settings menu again, by selecting Role Mapping as shown below.



A new pop-up window will open shown below. Click on the new mapping choice to get started.



On the first line you choose which AD domain to use for this role mapping. On the second, you choose what you will be mapping to; the options are an AD group, an AD OU, or a user. The third choice is which Prism role the mapping will assign. In the values field you input the name of the AD item you are mapping to. I chose group, so I needed to input the AD group name.

Note: It will accept inputs that are not correct, meaning it does not seem to validate them. I input the group name in all lowercase; this did not work but was accepted. I came back later and changed it to match the capitalization shown in AD, and it worked right away.
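In other words, the mapping value appears to be compared to the AD object name exactly, with no case folding. A two-line simulation of that gotcha (my own toy function with a hypothetical group name, not Prism's code):

```python
def mapping_matches(mapping_value, ad_group_name):
    """Exact, case-sensitive comparison -- what the behavior above suggests."""
    return mapping_value == ad_group_name  # no .lower()/.casefold() applied

print(mapping_matches("prismadmins", "PrismAdmins"))  # False -- lowercase fails
print(mapping_matches("PrismAdmins", "PrismAdmins"))  # True
```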



After entering and saving your new mapping, the screen below shows the new entry. You can also add more mappings, or edit or delete an existing mapping, from here.



The image below just shows the proper group name after I came back and updated it.



Next it was time to try to authenticate to Prism, so I attempted to log in using the different formats for entering a user name. It does work with the user@domain format, but did not like the domain_name\ option.



And once logged in, the upper right corner of Prism shows the authenticated user. It was now showing my username.



Overall, the process of setting this up was pretty simple. I had it working in about 15 minutes.



Creating a VMware Datastore on DataGravity storage

I have recently been evaluating and getting to know the initial storage offering from DataGravity. In short, they offer a unique storage array combining hybrid storage and storage analytics in one simple, easy-to-use offering. As I work with the product I will probably write up a few blog posts on how to do things. Expect a detailed review over at Data Center Zombie soon after the new year.

I’m finding the product very easy to work with, and thought a simple walkthrough of how to create a new export to be mounted as a VMware datastore would be helpful.

Step 1

Upon logging into the management page for a DataGravity array you will see the following welcome screen. I will be creating some new storage, so I will click on the storage choice to proceed.

DataGravity Datastore – Step 1


Step 2

Clicking the Storage choice displays a number of options. These are the major functions for creating and managing storage on the array. Click on Create Datastore to proceed with the task for this post.

DataGravity Datastore – Step 2


Step 3

The first step of creating the mount point that will become a datastore is to provide a name, a capacity size, and an optional description.

DataGravity Datastore – Step 3


Step 4

This step is where you grant access to the hosts in the clusters that will utilize this new datastore. The image shows that I have already added the first host; by clicking the blue plus button you can add the other hosts.

DataGravity Datastore – Step 4


Step 5

The following image shows the process for adding another host. You can enter the host name or IP address for enabling access.

DataGravity Datastore – Step 5


Step 6

The policy step is where you can select an existing Discovery Policy or create another. In short, these policies govern how the data is analyzed and protected. Once ready, click the Create button at the bottom, and the mount point will be ready to be configured on the vCenter side.

DataGravity Datastore – Step 6


Step 7

Now that the mount point is ready, I have selected one of my vSphere hosts and will add NFS storage to it. I have provided the IP for the data path to the storage array. The folder is the same as the mount point name we created earlier, and the datastore name can be whatever you like; I made it the same as the mount point name.

DataGravity Datastore – Step 7


Step 8

With all of the steps to create the mount point done and the datastore presented on the VMware side, I have taken a look back in DataGravity to list the mount points on the array. From here you can see what was created, along with details about capacity and the protection policy.

DataGravity Datastore – Step 8


Step 9

The last view here looks at our newly created mount point. I have moved a few VMs onto the datastore, and details about them have already started to appear. DataGravity is VM-aware, so you have access to more data than a legacy array would show.

DataGravity Datastore – Step 9


By now you have an idea of how easy it was to create and present a new datastore. The other functions on DataGravity are also very easy to use.



Tintri blows the top off with 3 new models and a bucket of features

Tintri has been selling storage arrays for almost four years now, and we have seen them go from a single model to the current three. Along the way they have added features and supported additional hypervisors. Today Tintri is blowing the top off of things and announcing their biggest update since coming out of stealth. If you would like to learn more about Tintri, you can read the in-depth Tintri review over at Data Center Zombie.

The T800 Series

Part of today’s release is a new set of storage arrays from Tintri. The T800 series is a line of bigger, faster storage arrays that support more VMs. I like the name T800; it reminds me of another T800 that kicked a lot of butt in the Terminator movies. The T800 series utilizes Tintri’s new compression feature on the capacity tier to offer greater effective capacity. The compression is all done inline, and the values below are based on a modest 2:1 sample. Every model in the 800 series offers more flash than the previous 600 series of arrays.

  • T820 – 23TB effective capacity and 750 VMs
  • T850 – 66TB effective capacity and 2000 VMs
  • T880 – 100TB effective capacity and 3500 VMs



Tintri OS 3.1 Updates

Along with the new hardware, today’s announcement covers the new Tintri OS version 3.1. This update brings a number of new features along with support for the new array models. The latest OS update applies to both the 600 and 800 series of storage arrays.

SRM support – Tintri was already offering storage-based replication on a per-VM basis, which was a great feature. The new update brings support for VMware SRM to allow for automated failover and recovery of VMs.

Encryption – Another new feature is data-at-rest encryption on all T800 and T600 series storage arrays. Tintri is using self-encrypting drives in these arrays, and OS version 3.1 brings internal key management. At some future date we will see Tintri offer support for external key management options.

Automation updates – In the previous OS update, Tintri released PowerShell support as a first step toward letting admins and applications automate the storage platform. With today’s news they are announcing a brand-new REST API interface. This will allow customers to further build automation and interface with the storage in new ways.

Hyper-V support is coming soon. In my briefing with the Tintri product team, they mentioned that Hyper-V support is expected to be released in late Q4 of this year.


