Nutanix .Next Conference announcements summary and recap

Today kicks off the first day of Nutanix’s first conference. It’s a day that’s been anticipated for a while, and I really wish I could have been present. The .Next conference is certainly going to be the hottest hyperconverged infrastructure event of 2015. There are a ton of details and announcements coming out of .Next today, and I’m going to try to break them down for you here. Over the next couple of weeks there will be deeper conversations on many of these topics, which may lead to further blog posts.

To find out more about Nutanix, read my detailed Nutanix product review at Data Center Zombie and my hyperconverged comparison article.

From One Product to Two

Historically Nutanix has sold only a single product, called the Virtual Computing Platform. I never heard many people refer to it by the product name; since there was only a single product, it was always just referred to as Nutanix. Today Nutanix is announcing that they are splitting the platform into two separate products that fall under the new Xtreme Computing Platform (XCP) badge.

In the past, Prism was just part of the entire Nutanix solution. It’s been the primary point of management for several versions now. Today Nutanix has announced that Prism is becoming its own product and will be available in this form sometime in Q4 of 2015. As part of this, additional functionality is being added that I will cover later in this post. Nutanix Acropolis is the second product under this new approach. Acropolis will contain all of the storage functionality and data services commonly contained within the Nutanix Distributed File System (NDFS). There are also many new features being released and announced as part of Acropolis that I will cover soon.

Nutanix Acropolis

While many in the industry are going to laser-focus on the KVM-based Acropolis hypervisor option, Acropolis as a product is far more than that. The image below shows how Nutanix thinks of features within Acropolis as being contained in two major buckets. The App Mobility Fabric seems to focus on availability, automation, and integration with the greater stack. The Distributed Storage Fabric portion is about data protection, storage services, and hypervisors.

The following image touches on how the App Mobility Fabric features provide the mobility, automation, availability, and integration mentioned above. These are by default not tied to a hypervisor platform, but may have different release dates for the different hypervisors. In Q4 this feature set is going to expand to offer VM migration between hypervisors, which would allow powered-off VMs to be moved between vendors while staying on the Nutanix storage solution. This opens up a lot of opportunities for easily using different hypervisor platforms across environments or projects within the same organization.

If you follow the blogs and tech rags you have probably seen the rumor that Nutanix was creating its own hypervisor to compete with VMware vSphere. Well, that was just a rumor, but based on how some of these updates are messaged I can see how people jumped to conclusions. Nutanix still supports vSphere, Hyper-V, and KVM as it has for some time now. What they are doing is improving the availability and operational management story around the KVM hypervisor. This new approach, along with support for the other hypervisors, is what Acropolis is bringing to market.

Many of the features that now fall under Acropolis have been available in the Nutanix platform for some time. The image below explains what’s available today, and the following images touch on some of the new features.

In release 4.1.3, Nutanix will bring an Image Service to the KVM-based hypervisor offering.

The new KVM High Availability (HA) feature is still only a Tech Preview, but it has to be one of the most exciting features. This will bring a similar experience to KVM as VMware customers have been enjoying for years with vSphere HA. No longer will VMs require manual intervention to restart on the remaining nodes in a cluster after a host failure.

Nutanix Prism

We can see from the image below that Prism is focused on providing and improving the one-click management features that are already there, along with one-click operational insight focused on increasing visibility into what is happening within your environment, and lastly on expanding Prism to perform one-click troubleshooting.

The upcoming version of Prism later in 2015 is going to take the features we know and love today to the next level, along with adding a bunch of new ones. The slideshow below gives a little sneak peek at what we can expect. The VM management offering is being expanded, and there is going to be an L2 network configuration option to assist with cluster deployments. The storage management capabilities are being improved. And lastly, for cluster management, the one-click upgrades that we like today will continue to be a theme for other areas.

Operational Insight Features

The operational insight area of Prism is going to see a big set of improvements and new features over the existing versions. We can expect to see a Service Impact Analysis that will help correlate alerts to application impact. The upcoming Root Cause Analysis will help isolate issues and lean on knowledge from the Nutanix global community. And a Remediation Advisor will give targeted recommendations for how to optimize the solution.

I’m excited to see that capacity planning is coming to the Prism product in Q4. I think this is an important addition for Nutanix, and it looks like they took their time so they could release a great product the first time around. There will be a capacity trends set of features to report on what is going on in your environment and provide time estimates. An optimization advisor will help you find space that might be reclaimed, such as unused VMs, snapshots, and other items. And last up, a what-if analysis will estimate what resources may be needed for different scenarios. The cool part is that this feature will at some point be able to send data into the Nutanix sizing calculator to allow for a better sizing and scaling exercise.

A new search feature called Prism Instant Search will also be coming. The image below shows some of the things you can do with the search feature. Think of it as a Google-like search experience that allows Nutanix admins to find and accomplish tasks faster. I’m looking forward to seeing it in action.

Last up is the ability to create customizable dashboards in Prism. I’ve had a few customers looking for this ability, and they will be happy that it’s coming later this year. The image below looks like a great example that shows specific workloads and alerts in a focused view that a specific role would be interested in. I think I also heard that real-time stats will be available from Prism; today there is a slight time delay or a page refresh needed to get updates.

As you can see, there is a lot of news coming out of Nutanix .Next. I think most everything was covered, but I may have missed a few things, which I will try to update as I hear of them.

 

About Brian Suhr

Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix and the owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and cloud project designs. Awarded VMware vExpert status six years running, 2011 - 2016. VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design.

Nutanix presents Community Edition to the world

Nutanix Community Edition (CE) is neither the best nor the worst kept secret. A community edition of their storage platform has been loosely talked about for some time now, and as recently as a few months ago Nutanix leadership mentioned it in some public interviews. Nutanix has been looking for a way to give people in the tech community an easier path to getting some hands-on experience with the platform.

As a Nutanix Technology Champion (NTC) I was invited to participate in early trials of Community Edition with other NTCs. We were given briefings on the CE product, access to install it, and a private forum in which to provide feedback on the product and our testing.

The community edition will become available to the community during the .Next user conference in early June. You can sign up here to get notified when CE is ready to go.

 

What is Community Edition

Simply put, Nutanix Community Edition allows you to build a small hyperconverged cluster in your home lab. The product has an automated install. Although it’s not the same deployment method as the production product, it’s still better than doing the work by hand. To provide flexibility in the hardware that people can use with CE, Nutanix could not reuse the same deployment process that is used for their production HCI appliances.

With Nutanix CE you will be able to build a hyperconverged lab that consists of one to four nodes. This allows a single-node Nutanix install for those who do not have a lot of hardware to play with, while those with larger labs can build a three- or four-node cluster for a bigger install. This type of install could provide some serious power to people’s home labs.

The Nutanix CE install is bare metal only today. It uses KVM as the hypervisor and deploys the Controller VM (CVM) on top of it as normal. Home labbers can then deploy VMs on the KVM cluster. After the install you end up with a fully functional Nutanix cluster that can dedupe, compress, and perform well. There are some big plans, such as the same one-click upgrades as the production product.

The following is a list of minimum and recommended hardware specs; these are the requirements today. I would expect these to loosen up and expand as the CE product matures. A quick sketch of a pre-flight check you could run against a candidate host follows the list.

  • Memory: 16GB min, 32GB or more recommended
  • Intel CPU with VT-x, 4 cores min
  • Intel-based NIC
  • Cold tier storage: 500GB min, 1 per server (max 18TB, 3x 6TB HDD)
  • Flash: 200GB min, 1 per server
  • Max number of SSD/HDD drives per node is 4
  • Boot to USB drive or SATA DOM
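
Since these minimums trip up plenty of home lab hardware, here is a minimal sketch of a pre-flight check you could run on a candidate Linux host before attempting the install. This is my own illustration, not a Nutanix tool; the thresholds simply mirror the list above, and the sda/sdb device names are assumptions you would adjust for your hardware.

```python
#!/usr/bin/env python3
"""Rough pre-flight check against the Nutanix CE minimums listed above."""
import os
import re

MIN_MEM_GB = 16        # 32GB or more recommended
MIN_CORES = 4
MIN_COLD_GB = 500      # cold tier (HDD)
MIN_FLASH_GB = 200     # flash tier (SSD)

def mem_gb():
    # MemTotal in /proc/meminfo is reported in kB.
    with open("/proc/meminfo") as f:
        kb = int(re.search(r"MemTotal:\s+(\d+) kB", f.read()).group(1))
    return kb / 1024 / 1024

def has_vtx():
    # Intel VT-x shows up as the "vmx" CPU flag.
    with open("/proc/cpuinfo") as f:
        return "vmx" in f.read()

def disk_gb(dev):
    # /sys/block/<dev>/size is a count of 512-byte sectors.
    with open(f"/sys/block/{dev}/size") as f:
        return int(f.read()) * 512 / 1024 ** 3

checks = [
    ("Memory >= 16GB", mem_gb() >= MIN_MEM_GB),
    ("CPU cores >= 4", (os.cpu_count() or 0) >= MIN_CORES),
    ("Intel VT-x present", has_vtx()),
    ("Cold tier >= 500GB (sda assumed)", disk_gb("sda") >= MIN_COLD_GB),
    ("Flash >= 200GB (sdb assumed)", disk_gb("sdb") >= MIN_FLASH_GB),
]

for name, ok in checks:
    print(f"{'PASS' if ok else 'FAIL'}  {name}")
```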

What I wish it was

I think that Community Edition is a great idea and I hope that it’s a big success. But as an avid home lab person, I am not crazy about the idea of having to dedicate physical hardware to this. I know that I will get better performance and an experience more like the real product. But as a community product, I think people just want to get their hands on it to play with it. This could be more easily accomplished by offering it as a virtual appliance that I can run on my existing lab. No need to dedicate hosts and wipe disks. I can accept that a virtual appliance won’t give me the full experience and performance, but it would allow more people to play with the community edition. I hope to see this as an option as CE matures.

I would like to say thanks to the Nutanix CE team for the experience and releasing a cool product to the community.

 

Atlantis goes HyperScale and enters the Hyperconverged market

In a move that might surprise some but not others, Atlantis is announcing the availability of a hyperconverged appliance. I like this move from Atlantis and I think it will offer a more appealing solution for many customers.

The HyperScale product marries a hardware appliance-based approach with their USX software-defined storage solution. The appliance will be all-flash and initially come in two different storage capacity options. This new offering brings a simplified, fast deployment process and single-call support from Atlantis for the full stack.

To start, Atlantis will support VMware vSphere and Citrix XenServer as hypervisors. One can only speculate on how soon they may offer support for others, such as Microsoft Hyper-V. The small group of XenServer users will rejoice, as there is finally a hyperconverged offering for them.

What’s the Hardware?

So there will be a hardware appliance; what are the details, and who builds it? Atlantis is taking an approach that some other vendors have adopted lately: they are not offering just a single hardware option. Instead, Atlantis is going to offer HyperScale options on Lenovo, HP, Cisco UCS, and SuperMicro hardware. There will be a tightly controlled number of models from each vendor, each with a specific configuration.

The HyperScale appliances will only be available through Atlantis channel partners. When the partner makes the sale, they will order the specific server vendor SKU with maintenance. They will then also sell the customer the Atlantis HyperScale SKU and maintenance. The products will be built by the channel partner and delivered to the customer. This approach allows customers to take advantage of existing pricing they might have with their approved server vendor.

The Lenovo, HP, and Cisco hardware options will be based on 1U rack-mount servers. The SuperMicro option uses the Twin Pro, a 2U four-node configuration used by other hyperconverged and storage vendors.

 

How does support work?

Atlantis will offer one-call support for the HyperScale solution. This covers anything from the hardware to the hypervisor and, of course, the USX storage layer. Server hardware support will be covered under the server vendor’s maintenance, and Atlantis will be able to file service requests to have hardware replaced on behalf of the customer. This allows a single call to cover the solution without needing to call HP to get a drive replaced, for example. This hardware maintenance approach lets Atlantis immediately take advantage of the global service coverage that these server vendors have already built out, saving Atlantis from a long, expensive process of building out support capacity themselves.

 

What are the configurations?

Initially there are two different storage capacity options. There will be 12TB and 24TB sizes available to start, and possibly a 48TB option in the future. The 12TB model has 4x 400GB flash drives and the 24TB model has 4x 800GB drives. You might be asking how they arrive at those capacity numbers with so few drives. Atlantis is basing the capacity calculations on a 4-node configuration and factoring in data reduction of 70% to achieve the published capacities. They are offering a capacity guarantee for the HyperScale offering: if customers are unable to achieve this level of data reduction, Atlantis will work with the customer to license or provide additional capacity. The flash drives are Intel S3710s. The link below is to a PDF that explains the storage guarantee.

Atlantis_PN_HyperScale_Storage_Guar_0415_web
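
To make the shape of that math concrete, here is a back-of-the-envelope sketch. The drive counts and the 70% data reduction figure come from the announcement; the 2-way mirroring overhead is my own assumption, since the protection scheme isn’t spelled out here, so the output is illustrative rather than a reproduction of the published 12TB figure.

```python
# Back-of-the-envelope effective capacity for a 4-node HyperScale block.
# Assumptions (mine, not Atlantis's): 2-way mirroring for data protection
# and no deduction for filesystem or metadata overhead. "70% data
# reduction" is read as stored data occupying 30% of its logical size.

NODES = 4
DRIVES_PER_NODE = 4
DRIVE_GB = 400            # 12TB model: 4x 400GB flash per node
DATA_REDUCTION = 0.70     # published planning assumption
MIRROR_COPIES = 2         # assumed protection overhead

raw_gb = NODES * DRIVES_PER_NODE * DRIVE_GB
usable_gb = raw_gb / MIRROR_COPIES
effective_gb = usable_gb / (1 - DATA_REDUCTION)

print(f"raw: {raw_gb / 1000:.1f}TB, usable: {usable_gb / 1000:.1f}TB, "
      f"effective: {effective_gb / 1000:.1f}TB")
# -> raw: 6.4TB, usable: 3.2TB, effective: 10.7TB; close to, but not
#    exactly, the published 12TB, so Atlantis's own overhead and
#    reduction model clearly differs from these simplified assumptions.
```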

All of the server options will be dual-socket servers using Intel E5 v3 chips. The 12TB option offers memory configurations of 256GB to 512GB, and the 24TB option offers 384GB to 512GB. A pair of 10GbE and 1GbE network connections will be available on each node.

In the initial offering, the minimum configuration will be 4 nodes. That’s one SuperMicro chassis or four 1U servers from the other vendors. The unit of scale will be 4 nodes at a time to start. Atlantis will offer single-node scaling after the initial minimum deployment as a roadmap item sometime in 2015.

 

My Point of View

I like this move from Atlantis. The USX software-defined storage option was attractive, but I have always liked the appliance-based approach much better. Vendors that take an appliance approach to these offerings are able to provide a better deployment, scaling, upgrade, and operational story for their customers.

Creating a VMware Datastore on DataGravity storage

I have recently been evaluating and getting to know the initial storage offering from DataGravity. In short, they offer a unique storage array that combines hybrid storage and storage analytics in one simple, easy-to-use offering. As I work with the product I will probably write up a few blog posts on how to work with it. Expect a detailed review over at Data Center Zombie soon after the new year.

I’m finding the product to be very easy to work with and thought a simple walk-through of how to create a new export that will be mounted as a VMware datastore would be helpful.

Step 1

Upon logging into the management page for a DataGravity array you will see the following welcome screen. I will be creating some new storage, so I will click on the storage choice to proceed.

DataGravity Datastore – Step 1

 

Step 2

The storage choice displays a number of options once clicked on. These are the major functions for creating and managing storage on the array. Click on Create Datastore to proceed with the task for this post.

DataGravity Datastore – Step 2

 

Step 3

The first step of creating the mount point that will become a datastore is to provide a name, a capacity size, and an optional description.

DataGravity Datastore – Step 3

 

Step 4

This step is where you grant access to the hosts in the cluster that will utilize this new datastore. The image shows that I have already added the first host; by clicking the blue plus button you can add the other hosts.

DataGravity Datastore – Step 4

 

Step 5

The following image shows the process for adding another host. You can enter the host name or IP address for enabling access.

DataGravity Datastore – Step 5

 

Step 6

The policy step is where you can select an existing Discovery Policy or create a new one. In short, these policies govern how the data is analyzed and protected. Once ready, click the Create button at the bottom, and the mount point will then be ready to be configured on the vCenter side.

DataGravity Datastore – Step 6

 

Step 7

Now that the mount point is ready, I have selected one of my vSphere hosts and will add NFS storage to it. I have provided the IP of the data path to the storage array. The folder is the same as the mount point name that we created earlier, and the datastore name can be whatever you like; I have made it the same as the mount point name.
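
If you would rather script this step than click through the vSphere client, the same NFS mount can be done with pyVmomi. This is a minimal sketch under assumed values: the vCenter address, credentials, host name, array data IP, and export path below are placeholders you would swap for your own.

```python
# Minimal pyVmomi sketch: mount the DataGravity NFS export as a datastore.
# All names, addresses, and credentials below are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; validate certs in production
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)

# Find the ESXi host that should mount the new datastore.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")

# NFS mount spec: remoteHost is the array's data IP, remotePath is the
# mount point created above, localPath is the datastore name in vSphere.
spec = vim.host.NasVolume.Specification(
    remoteHost="10.0.0.50",
    remotePath="/Datastore01",
    localPath="Datastore01",
    accessMode="readWrite",
    type="NFS",
)
host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
```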

DataGravity Datastore – Step 7

 

Step 8

Once all of the steps to create the mount point are complete and it’s presented on the VMware side, I took a look back in DataGravity to list the mount points on the array. From here you can see what was created, along with details about capacity and the protection policy.

DataGravity Datastore – Step 8

 

Step 9

The last view here looks at our newly created mount point. I have moved a few VMs onto the datastore and details about them have already started to appear. DataGravity is VM-aware, so you have access to more data than a legacy array would show.

DataGravity Datastore – Step 9

 

By now you have an idea of how easy it was to create and present a new datastore. The other functions of DataGravity are also very easy to use.

 

Tintri blows the top off with 3 new models and a bucket of features

Tintri has been selling storage arrays for almost four years now and we have seen them go from a single model offering to the current three models. Along the way they have added features and supported additional hypervisors. Today Tintri is blowing the top off of things and announcing their biggest update since coming out of stealth. If you would like to learn more about Tintri you can read the in-depth Tintri review over at Data Center Zombie.

The T800 Series

Part of today’s release is a new set of storage arrays from Tintri. The T800 series is a line of bigger, faster arrays that support more VMs. I like the name T800; it kind of reminds me of another T800 that kicked a lot of butt in the Terminator movies. The T800 series utilizes Tintri’s new compression feature on the capacity tier to offer greater effective capacity. The compression is all done inline, and the published values are based on a modest 2:1 ratio. Every model in the 800 series offers more flash than the previous 600 series of arrays.

  • T820 – 23TB effective capacity and 750 VMs
  • T850 – 66TB effective capacity and 2000 VMs
  • T880 – 100TB effective capacity and 3500 VMs

Tintri OS 3.1 Updates

Along with the new hardware, today’s announcement covers the new Tintri OS version 3.1. This update brings a number of new features along with support for the new array models. The latest OS update applies to both the 600 and 800 series of storage arrays.

SRM support – Tintri was already offering storage-based replication on a per-VM basis, which was a great feature. The new update brings support for VMware SRM to allow for automated failover and recovery of VMs.

Encryption – Another new feature is data-at-rest encryption on all T800 and T600 series storage arrays. Tintri is using self-encrypting drives in these arrays, and OS version 3.1 brings internal key management. At some future date we will see Tintri offer support for external key management options.

Automation updates – In the previous OS update Tintri released PowerShell support as the first step in allowing admins and applications to automate the storage platform. With today’s news they are announcing a brand-new REST API interface. This will allow customers to further build automation and interface with the storage in new ways.
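
As a rough illustration of what that kind of REST automation looks like, here is a minimal Python sketch using the requests library. The endpoint paths, field names, and authentication flow shown are placeholders of my own, not Tintri’s documented API; check the official REST API documentation for the real resource names.

```python
# Illustrative REST automation sketch; URL paths and JSON fields are
# placeholders, not Tintri's documented API.
import requests

ARRAY = "https://tintri-array.lab.local"

session = requests.Session()
session.verify = False  # lab only; use trusted certificates in production

# Hypothetical login and VM listing; substitute the real endpoints from
# the vendor's REST API documentation.
session.post(f"{ARRAY}/api/login",
             json={"username": "admin", "password": "password"})

resp = session.get(f"{ARRAY}/api/vms")
for vm in resp.json().get("items", []):
    print(vm.get("name"), vm.get("provisionedSpaceGiB"))
```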

Hyper-V support is coming soon. In my briefing with the Tintri product team, they mentioned that Hyper-V support is expected to be released in late Q4 of this year.

 

 
