What is the entry cost for Nutanix Community Edition (CE)?

Last week Nutanix officially announced the Community Edition of their platform. They let everyone know that a private Alpha and Beta test period had already been completed and that a public Beta period was starting. The public Beta will allow users to sign up, and Nutanix will open the Beta to a new set of testers each week. This will let them onboard new testers gradually without overburdening the process and causing confusion.

The Community Edition (CE) is a software-only version of the Nutanix hyperconverged platform. It currently runs only on KVM as the hypervisor and must be installed on bare metal hardware. This left some people confused and wondering what the cost of testing this community version would be. I was happy to see that there are options for testing a single node CE install along with a 3 or 4 node install. The fact that a single node install is allowed offers a pretty low cost of entry. This is the route that I chose for my testing, since many of my lab servers are AMD based. I did have an Intel-based server and a few whiteboxes that were potential candidates for the testing.

Following some of the conversations online about CE, it seemed like some felt that the cost of entry for playing with CE was too high. While I do agree that a nested version that could be deployed as a virtual appliance is on my wishlist, the bare metal option is nice as well. I'm planning on using CE as a long-term storage option in my lab, replacing the Nexenta box that I was previously running.

Now on to the costs. I used an HP ML150 G6 server that I purchased about two years ago for around $400. It has just a single Intel E55xx series CPU and 24GB of memory. The built-in NICs are supported by CE, as is the storage controller. I had a consumer grade Samsung 830 SSD and a pair of 2TB HDDs that I was using in my old build. So the total build for me was probably between $700 and $1,000 at most. I wanted to test on my whitebox build that is around $500, but a failed CPU is causing a delay.

I did a little eBay searching today and saw that Dell R610 and R710 servers are pretty cheap, averaging around $350 to $500 with the right CPUs and amount of memory. All you would need to do is add the right drives if they don't have them already. So I think this is a pretty reasonable cost of entry for an advanced product. I know many people's home servers may already meet these requirements.

Long term I'm going to be thinking about how I can design CE to be my main storage in the home lab. It will likely be a non-standard approach of running CE on KVM and presenting the storage externally to my vSphere clusters. But if it gets the job done, I'm fine with that. I may use a pair of single node installs and replicate between them for my data protection strategy, saving me from burning a third server. This should save me from spending several thousand dollars on a higher-end Synology NAS device and all the drives. It also gives me some cool features that a home NAS won't provide.

If you are running CE and have a reasonably priced build, drop it in the comments and share with others.



Nutanix presents Community Edition to the world

Nutanix Community Edition (CE) is not the best or worst kept secret. A community edition of their storage platform is something that has been loosely talked about for some time now, and as recently as a few months ago Nutanix leadership mentioned it in public interviews. Nutanix has been looking for a way to give people in the tech community an easier way to get some hands-on experience with their platform.

As a Nutanix Technology Champion (NTC) I was invited to participate in early trials of Community Edition with other NTCs. We were given briefings on the CE product, access to install it, and a private forum in which to provide feedback on the product and our testing.

The community edition will become available to the community during the .Next user conference in early June. You can sign up here to get notified when CE is ready to go.


What is Community Edition?

Simply put, Nutanix Community Edition allows you to build a small hyperconverged cluster in your home lab. The product has an automated install. Although it's not the same deployment method as the production product, it's still better than doing the work by hand. To provide flexibility in the hardware that people can use with CE, Nutanix could not reuse the deployment process from their production HCI appliances.

With Nutanix CE you will be able to build a hyperconverged lab that consists of one to four nodes. This allows a single node Nutanix install for those that do not have a lot of hardware to play with, while those with large labs can build a three or four node cluster for a larger install. This type of install could provide some serious power to people's home labs.

The Nutanix CE install is bare metal only today. It uses KVM as the hypervisor and deploys the controller VM (CVM) on top of it as normal. Home labbers can then deploy VMs on the KVM cluster. After install you end up with a fully functional Nutanix cluster. It can dedupe, compress and perform well. There are some big plans too, such as the same one-click upgrades as the production product.

The following is a list of minimum and recommended hardware specs; these are the requirements today. I would expect them to loosen up and expand as the CE product matures.

  • Memory: 16GB minimum, 32GB or more recommended
  • Intel CPU with VT-x, 4 cores minimum
  • Intel-based NIC
  • Cold tier storage: 500GB minimum, at least 1 per server (max 18TB, 3x 6TB HDD)
  • Flash: 200GB minimum, at least 1 per server
  • Maximum of 4 SSD/HDD drives per node
  • Boot to USB drive or SATA DOM
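For anyone sizing up spare hardware against the list above, a quick sanity check can be scripted. This is a hypothetical helper of my own, not any official Nutanix tool; the spec names and thresholds simply mirror the bullets in this post, and the example server uses assumed drive sizes.

```python
# Minimums from the CE requirements list in this post (not an official API).
CE_MINIMUMS = {
    "memory_gb": 16,       # 32GB or more recommended
    "cpu_cores": 4,        # Intel CPU with VT-x
    "cold_tier_gb": 500,   # at least one HDD per server
    "flash_gb": 200,       # at least one SSD per server
}

def meets_ce_minimums(spec):
    """Return the names of any specs that fall below the CE minimums."""
    return [name for name, minimum in CE_MINIMUMS.items()
            if spec.get(name, 0) < minimum]

# Example home-lab server (sizes assumed for illustration).
candidate = {"memory_gb": 24, "cpu_cores": 4,
             "cold_tier_gb": 2000, "flash_gb": 256}
print(meets_ce_minimums(candidate))  # → []
```

An empty list means the box clears every minimum; otherwise you get back the shortfalls to shop for.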



What I wish it was

I think that Community Edition is a great idea and hope that it's a big success. But as an avid home lab person, I am not crazy about the idea of having to dedicate physical hardware to it. I know that I will get better performance and an experience more like the real product. But as a community product, I think people just want to get their hands on it to play with it. That could be more easily accomplished by offering it as a virtual appliance that I can run on my existing lab. No need to dedicate hosts and wipe disks. I can accept that a virtual appliance won't give me the full experience and performance, but it would allow more people to play with the Community Edition. I hope to see this as an option as CE matures.

I would like to say thanks to the Nutanix CE team for the experience and releasing a cool product to the community.



Atlantis goes HyperScale and enters the Hyperconverged market

In a move that might surprise some and not others, Atlantis is announcing the availability of a hyperconverged appliance. I like this move from Atlantis and I think it will offer a more appealing solution for many customers.

The HyperScale product marries a hardware appliance-based approach with their USX software defined storage solution. The appliance will be all flash and will initially come in two different storage capacity options. This new offering brings a simplified and fast deployment process and single-call support from Atlantis for the full stack.

To start Atlantis will support VMware vSphere and Citrix XenServer as hypervisors. One can only speculate on how soon they may offer support for others, such as Microsoft Hyper-V. The small group of XenServer users will rejoice as there is finally a hyperconverged offering for them.



What’s the Hardware?

So there will be a hardware appliance; what are the details, and who builds it? Atlantis is taking an approach that some other vendors have taken lately: they are not offering just a single hardware option. Instead, Atlantis will offer HyperScale options on Lenovo, HP, Cisco UCS and SuperMicro hardware. The number of models from each vendor, and their specific configurations, will be tightly controlled.

The HyperScale appliances will only be available through Atlantis channel partners. When the partner makes the sale, they will order the specific server vendor SKU with maintenance. They will then also sell the customer the Atlantis HyperScale SKU and maintenance. The products will be built by the channel partner and delivered to the customer. This approach allows customers to take advantage of any existing pricing they might have with their approved server vendor.

The Lenovo, HP and Cisco hardware options will be based on 1U rack mount servers. The SuperMicro option uses the Twin Pro, a 2U four-node configuration used by other hyperconverged and storage vendors.


How does support work?

Atlantis will offer single-call support for the HyperScale solution. This covers everything from the hardware to the hypervisor and, of course, the USX storage layer. The server hardware will be covered under the server vendor's maintenance, and Atlantis will have the ability to file service requests to have hardware replaced on behalf of the customer. This allows a single call to cover the whole solution without needing to call HP to get a drive replaced, for example. It also lets Atlantis immediately take advantage of the global service coverage that these server vendors have already built out, saving Atlantis from a long, expensive process of building out support capacity themselves.


What are the configurations?

Initially there are two different storage capacity options. There will be 12TB and 24TB sizes available to start, and possibly a 48TB option in the future. The 12TB model has 4x 400GB flash drives and the 24TB has 4x 800GB drives. You might be asking how they arrive at those capacity numbers with so few drives. Atlantis is basing the capacity calculations on a 4 node configuration and factoring in a data reduction of 70% to achieve the published capacities. They are offering a capacity guarantee for the HyperScale offering: if customers are unable to achieve this level of data reduction, Atlantis will work with the customer to license or provide additional capacity. The flash drives are Intel S3710s. The link below is to a PDF that explains the storage guarantee.
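Atlantis has not published the exact accounting behind those numbers, but a back-of-the-envelope sketch gets close. The assumptions here are mine: that the 4x 400GB drives are per node, that data is kept in two copies across the cluster, and that a "70% reduction" means stored data shrinks to 30% of its logical size.

```python
def effective_capacity_gb(nodes, drives_per_node, drive_gb,
                          copies=2, reduction=0.70):
    """Rough effective capacity under assumed mirroring and data reduction."""
    raw = nodes * drives_per_node * drive_gb   # raw flash in the cluster
    usable = raw / copies                      # after keeping `copies` copies
    return usable / (1 - reduction)            # logical data that fits

# 12TB model: 4 nodes, 4x 400GB drives per node (per-node count assumed)
print(round(effective_capacity_gb(4, 4, 400)))  # → 10667
```

That lands around 10.7TB, in the neighborhood of the marketed 12TB but not exactly on it, so the overheads and rounding Atlantis actually uses clearly differ somewhat from this sketch.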


All of the server options will be dual socket servers using Intel E5 v3 chips. The 12TB option offers 256GB to 512GB of memory and the 24TB option offers 384GB to 512GB. A pair of 10GbE and 1GbE network connections will be available for each node.

In the initial offering, the minimum configuration will be 4 nodes: one SuperMicro chassis or four 1U servers from the other vendors. The unit of scale will be 4 nodes at a time to start. Atlantis has single node scaling after the initial minimum deployment as a roadmap item for some time in 2015.


My Point of View

I like this move from Atlantis. The USX software defined storage option was attractive, but I always liked the appliance-based approach much better. Vendors that take an appliance approach to these offerings are able to provide a better deployment, scaling, upgrade and operational story for their customers.


The differences in communities

I've noticed something over time, and this thought came to mind while having a conversation with my wife. Last year I decided to purchase my first dirt bike; I really had not ridden since I was a teenager. But after watching my son ride his for the last couple of years, it seemed like it was time for me to jump in. I did not want to miss out on the fun any longer, and riding together would be a great way to spend more time together.

Both of our bikes were purchased used, so there have been some minor fixes that we have been working on. I'm a bit geeky and do not have much of a mechanical background, so we like to do research on the internet before starting a project. If the project is too big or complex we might just take it to a local shop.


Motorcycle Community

What my wife and I noticed is that the motorcycle community is a bit unique. In researching different topics and asking for advice, we were amazed at how helpful people are. There is a bit of a brotherhood in the motorcycle community, whether online or in person. I've always noticed that bikers wave to each other when riding on the streets; they don't know each other, but they respect each other. They don't ignore a guy riding a BMW bike in a suit and tie versus a hardcore biker dude on a Harley. They all wave, no judgments.

We also noticed that web pages and forums focused on dirt bikes had a very similar brotherhood. People would ask questions about how to fix or modify something, and they were not attacked for their choices or called stupid. No one was saying, "Why are you doing it like that, you're an idiot." They just helped: people offered suggestions and gave helpful feedback. This behavior was observed across all types of sites; it was not isolated to a single forum.

I recently pulled my bike out of the garage and was giving it a check over before the spring riding season. I checked the fluids, did the other routine maintenance, and then worked on getting it started for the first time. It was not cooperating; I was probably out there for an hour and totally frustrated. Up the driveway came a new neighbor who had moved in over the winter. We had not had a chance to meet him yet; he was a fellow dirt biker and offered some tips. Within a few minutes it was running, and I could not have been happier.


Tech Community

Now let's have a look at the tech community. Don't get me wrong, there are a lot of very nice people who are more than willing to help others, especially in the VMware community. But there always seems to be a small group of people who will instantly jump into flame or attack mode, just because they think that someone is taking the wrong approach or choosing a product that they don't like.

Not everyone approaches problems in the same way, and people looking for advice are not looking to be attacked. People seek out others online to try to learn from their experiences or get help on how to approach or fix something. Many times you are not the first person to have a specific issue, and there are plenty of helpful people who answer posts on forums and write helpful blog posts. But in the tech community and others, there is a fair number of people who want to do nothing but jump in and cause trouble. They attack people for their methods and choices without offering anything helpful to the conversation.


So why?

I've often wondered why people exhibit this type of behavior. There are probably a lot of different reasons. Is it that the motorcycle community is just more confident in themselves? They don't need to try to prove they are smarter than the next person? Do they have no interest in being an internet tough guy who hides behind his keyboard and feels superior? I have no idea why.

I'm pretty sure that we also don't see executives from Honda writing biased blog posts against Kawasaki or attacking each other on Twitter. They are both focused on building an excellent product and taking care of their customers. Having a war with your competitors does nothing positive for your customers. It does not affect their buying decision in a positive way; it just makes you look petty or rude.

So the next time you are about to jump on someone online, maybe take a deep breath and think about what they are really asking for. You have a few choices: you can be a mature person and keep your thoughts to yourself, or you can offer something positive or helpful to the conversation. There is nothing to be gained by attacking others.

Don't read too much into this blog post; it was not inspired by any recent trash talking or blog posts. Just a few thoughts I had on this recently. Hopefully I don't get attacked too much in the comments :)


VMware Horizon 6.1 brings new features and a peek at the future

Today brings another update to VMware Horizon: version 6.1 is being announced. With this update come several new features and a peek at a few others expected in a future release. The NVIDIA GPU support is the worst kept secret, since it was already announced that vSphere 6 would have vGPU support. It was only a matter of time until Horizon was updated to take advantage of the new vGPU feature.

Note: Some of the tech preview items will only be available via the public VMware demo site or via private requests. Not all tech preview items will be included in the GA code like many have been in the past.

The summer of 2014 saw the release of Horizon 6.0 and the ability to present RDS-based applications. It was missing a number of features, and VMware quickly closed the printing gap in 6.0.1. Today in 6.1 we are seeing several new features, which I will cover in more detail. A few other features will enter tech preview mode and are likely to be released in an upcoming version.

New Features


 USB Redirection

In 6.1, the ability to redirect USB storage devices to Horizon applications and hosted desktops is now available. This helps close another gap that existed. It will only be available on Windows Server 2012/2012 R2 OS versions.



Client Drive redirection

This is something that has been available in Citrix XenApp since the stone ages. It will only be available as a tech preview for now, but I'm sure we will see it released some time this year. Initial support is for Windows clients only, with other OSes coming later.


Horizon Client for Chromebooks

The current option, if you want to use a Chromebook as your endpoint, is to access Horizon via the HTML5 web client. This limits you to connecting to a desktop only, because Horizon apps are not supported over HTML5. Without a proper client, pass-through of items such as USB devices is not possible either.

The Horizon client for Chromebooks will be based on the Android version that has been around for a while. There has been growing demand for this client. It will be available as a tech preview sometime in Q1/Q2 of 2015.

Cloud Pod updates

The Cloud Pod architecture was released last year to provide an architecture for building a multi-site Horizon install. The initial version was not that attractive in my eyes. The updated version in 6.1 brings the configuration and management parts of Cloud Pod into the Horizon manager. Previously, configuration had to be done via the command line, and global entitlements were not shown in the Horizon manager.

Other Items

We also see a number of other check-the-box items that are expected due to the vSphere 6 updates.

  • VVOL support for Horizon 6 desktops
  • VSAN 6 support
  • Large cluster size support for VSAN6 and higher densities
  • Support for Windows 2012R2 as a desktop OS
  • Linux VDI will be a private tech preview option






Configure Active Directory authentication for Nutanix Prism

The more I work with Nutanix, the more I learn and like about the product. There have been a few things on my to-do list lately, plus a few ideas spawned from customer conversations. So I will be writing up some articles about these topics, and enabling AD authentication is the first one.

In this post I will walk through the steps needed to enable AD as a source for authentication. You will still be able to use local accounts if you wish.

Configure AD source

The first step is to create a link to the AD domain that we wish to use for authentication. Use the settings icon in the upper right of the Prism interface. Find and click the Authentication choice as shown below.



This will open a new window that will allow you to configure a new directory source. As shown in the image below click the button to configure the details for your AD domain.



On the first line you input a friendly name for the domain; this did not seem to allow spaces. The second line is the actual domain name. The third line is the URL for the directory and needs to be in the format shown below; I used an IP address to keep things simple in the lab. The fourth line lets you choose the type of directory, which currently only supports AD.
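If you are scripting your lab setup, the directory URL field follows the standard LDAP URL shape of scheme://host:port. The helper below is a hypothetical convenience of mine, not part of any Nutanix tooling; 389 is the default LDAP port and 636 is the usual LDAPS port.

```python
def directory_url(host, port=389, secure=False):
    """Build an LDAP URL in the scheme://host:port form."""
    scheme = "ldaps" if secure else "ldap"
    return f"{scheme}://{host}:{port}"

# Plain LDAP to a domain controller by IP, as I did in the lab
print(directory_url("10.0.0.25"))                        # → ldap://10.0.0.25:389
# LDAPS variant against a hypothetical DC hostname
print(directory_url("dc01.lab.local", 636, secure=True)) # → ldaps://dc01.lab.local:636
```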



Once you have entered the AD details and saved them, you will be taken back to the screen shown below. It should now list summary information about the AD domains configured for Prism. In my tests I configured two different domains.



Role Mapping

The idea of role mapping is to select an individual AD entry or group and assign it a level of access in Prism. You get started from the settings menu again, by selecting Role Mapping as shown below.



A new pop-up window will open, shown below. Click on the new mapping choice to get started.



On the first line, choose which AD domain you will be using for this role mapping. For the second choice, select what you will be mapping to; the options are an AD group, an AD OU, or a user. The third choice is which Prism role you will be assigning to the mapping. In the values field, input the name of the AD item you are mapping to. I chose group, so I needed to input the AD group name.

Note: It will accept inputs that are not correct, meaning it does not seem to validate them. I input the group name in all lowercase; this did not work but was accepted. I came back later and changed it to match the capitalization shown in AD, and it worked right away.



After entering and saving your new mapping, the screen below shows the new entry. You can also add more mappings or edit or delete an existing mapping from here.



The image below shows the corrected group name after I came back and updated it.



Next it was time to try to authenticate to Prism, so I attempted to log in using the different methods of entering a user name. It works with the username@domain.name format, but did not like the domain_name\user.name option.
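Since only the username@domain.name (UPN) form worked in my testing, a small helper that normalizes the legacy DOMAIN\user form before login can save users a failed attempt. This is a sketch of my own, and the NetBIOS-to-DNS mapping in it is a made-up example for this lab.

```python
# Hypothetical NetBIOS-name to DNS-domain mapping for the lab.
NETBIOS_TO_DNS = {"LAB": "lab.local"}

def to_upn(username):
    """Convert DOMAIN\\user to user@dns.domain; pass UPNs through unchanged."""
    if "\\" in username:
        netbios, user = username.split("\\", 1)
        return f"{user}@{NETBIOS_TO_DNS[netbios.upper()]}"
    return username

print(to_upn("LAB\\jdoe"))       # → jdoe@lab.local
print(to_upn("jdoe@lab.local"))  # → jdoe@lab.local
```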



And once logged in, the upper right corner of Prism shows the authenticated user; it was now showing my username.



Overall, the process of setting this up was pretty simple. I had it working in about 15 minutes.



No one application delivery strategy to rule them all

This topic has been coming up more often in conversations with customers when talking about the architecture of a modern EUC environment. Enterprises are looking for better ways to manage the computers that their users rely on for their work each day. A big portion of this is application functions such as installing, updating and controlling access. A common request is "I don't want multiple ways to do this type of work"; a single approach is the desire. To that I say:

“No one application delivery strategy to rule them all”



I understand the desire to have one master way to package and deliver applications, especially for large clients, and there are plenty of options for doing this. But depending on which method you choose, it might be ideal for the physical world but break many of the benefits in a virtual world, or vice versa. Consider a customer that has 100,000 users but only intends to virtualize 20,000 of them; they will be left with two very large environments to manage.

The physical computer environment is very static; customers tend to push applications to computers. This push typically does not need to closely follow the provisioning of the PC, so there can be a bit of a delay for the apps to install. Customers are exploring other options, such as RDS-based delivery and application virtualization like App-V, ThinApp and others, to help with these issues.

In a virtual desktop environment, desktops are provisioned quickly and applications need to be present and ready at the time of user login. In most environments there is no time for the classic application push approach, because the desktop may be disposable and would need a push every day or more often. Users will also not be willing to wait for the apps to appear after login. Vendors like VMware and Citrix have built multiple options for delivering applications at the point of desktop creation or user login.

The problem breaks down to this: if you move your legacy physical strategy into the virtual world, you will break or lose some of the features and value that virtual desktops deliver. If you want to adopt the tools from VMware or Citrix, you will then have to license this application technology for all of your physical devices, and that can be very expensive.

This is why I think people need to be comfortable having two strategies: one to modernize their physical PC environment and one for the virtual desktop environment, seeking to offer the best and most complete solution in each space. This may or may not require you to package apps twice, but it will result in you being able to provide the best possible solution.



