People, VMware vCAC is not easy; it takes effort to get value

I tend to get the feeling that many customers expect some type of magic to happen once they purchase vCloud Automation Center (vCAC). Maybe it's a disconnect in how the product is marketed or something else, but the message is not getting through. I've noticed this in customer meetings before a sale and during the architecture phase of projects.

vCAC as a product can be very powerful and allows a lot of flexibility to solve complex problems. But vCAC does not contain a bunch of magical workflows that will automate your data center out of the box. The product offers a self-service portal, a service catalog, machine blueprints, integration with automation tools, and the ability to surface data from ITBM for business reporting.

If you want to do anything beyond allowing someone to self-provision a template from vCenter, you need to do custom work. This work means creating blueprints with custom properties and logic, and tying in vCenter Orchestrator or another tool for the more complex tasks. This is where all the magic is accomplished. The point to drive home here is that just installing vCAC does not win you the battle or give you a feature-rich "Cloud". You will need to invest a lot of time, or hire someone, to build out the automation and orchestration that provides the real value in the solution.
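
To make the shape of that custom work concrete, here is a minimal, hedged sketch of the kind of glue that usually gets written: a script that kicks off a vCenter Orchestrator workflow through vCO's REST API, the sort of call a vCAC blueprint stub might end up driving. The host name, credentials, workflow ID and parameter names are placeholders, and the endpoint layout assumes the documented /vco/api REST interface of the 5.x releases.

```python
# Hedged sketch: start a vCenter Orchestrator workflow over the vCO REST API.
# Host, credentials, workflow ID and parameter names below are placeholders.
import requests

VCO_API = "https://vco.example.com:8281/vco/api"       # assumed vCO appliance URL
WORKFLOW_ID = "12345678-aaaa-bbbb-cccc-1234567890ab"   # hypothetical workflow ID

# vCO expects typed input parameters in this JSON shape.
payload = {
    "parameters": [
        {"name": "vmName", "type": "string", "scope": "local",
         "value": {"string": {"value": "web-prod-042"}}},
        {"name": "ipAddress", "type": "string", "scope": "local",
         "value": {"string": {"value": "10.0.20.42"}}},
    ]
}

resp = requests.post(
    f"{VCO_API}/workflows/{WORKFLOW_ID}/executions",
    json=payload,
    auth=("vcoadmin", "secret"),  # basic auth for the sketch; SSO in real deployments
    verify=False,                 # lab shortcut: skip certificate validation
)
resp.raise_for_status()

# A 202 response returns the execution URL in the Location header,
# which can be polled to track the workflow run.
print(resp.headers.get("Location"))
```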

I don't intend to scare anyone away, but rather to clear up, at a high level, what should be expected from a project based on vCAC.


Platform9 looks to bring OpenStack features to the Enterprise

A new company called Platform9 is coming out of stealth today. Platform9 is the creation of several early VMware engineers, a team that had their hands in building the great features that enterprises use every day. Their new product is set to bring OpenStack features and functionality to enterprises without all the challenges.

Platform9 is a SaaS-based management platform that offers OpenStack API compatibility. The product is built on OpenStack with custom management interfaces. The team is starting with KVM support out of the gate, with VMware vSphere support coming later this year. Think of it as a cloud management layer in the cloud that manages your on-site resources.
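
Because Platform9 advertises standard OpenStack API compatibility, ordinary OpenStack client libraries should work against it. Below is a minimal sketch using keystoneauth1 and python-novaclient to list instances; the auth URL, credentials and project name are made-up placeholders for illustration, not real Platform9 values.

```python
# Hedged sketch: talk to an OpenStack-compatible endpoint with standard tooling.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client

auth = v3.Password(
    auth_url="https://example.platform9.net/keystone/v3",  # assumed endpoint
    username="admin",
    password="secret",
    project_name="demo",
    user_domain_id="default",
    project_domain_id="default",
)
nova = client.Client("2.1", session=session.Session(auth=auth))

# List instances exactly as you would on any other OpenStack cloud.
for server in nova.servers.list():
    print(server.name, server.status)
```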

Below are a few highlights from the press release, which can be read in full here.

Platform9 is the easiest way for enterprises to implement a private cloud, with intelligent, self-service provisioning of workloads onto their computing infrastructure.
  • 100% Cloud Managed: Platform9’s cloud-based model means that there is no complex management software to setup, monitor and upgrade, thus simplifying the operational experience.
  • Single Pane of Glass: Platform9 offers unified management across diverse environments – Docker, KVM and VMware vSphere – across datacenters and geographies.
  • Based on OpenStack: Platform9 customers get the best of OpenStack with 100% API compatibility.

 

[Image: Platform9 Infrastructure View]


Do IT departments need to start hiring PR reps?

It's no secret that there is a communication, and sometimes a social, gap between the folks in the IT department and the executive team and end users. The gap exists for many reasons, one of which is the inability to properly describe issues to non-technical people.

This gap has always existed but seems to be widening in recent years. I think that as people continue to adopt cloud services and the app store model, they find it harder to understand why IT teams still operate the way they do. The business and end users will continue to demand services with short turnaround times and will want more choices than they were offered in the past.


To deal with this, I think IT staff need to learn new skills and become more adept at messaging their services and describing outages to the business and its users. This will improve how they are perceived and let them become the people who help others work more efficiently, rather than being seen as the creepy bunch locked away in basement cubicles.

How to fix it?

One option would be to send IT managers and team members to some type of communication workshop to try to improve their methods. I think this option would fail 99% of the time. So the option most likely to succeed would be for the IT department to hire a PR rep. The IT PR person would work closely with the CIO and directors to craft the department's mission statement and define its services. Here are some things that would help with the perception of the IT staff.

  • Communicate what services are available to the business and how they can and should be used.
  • Properly communicate the status of operations to the business during service interruptions.
  • When root cause is identified, communicate to leadership what the cause was and what steps were taken to mitigate the issue. (For example, a Microsoft OS patch causes an issue with a number of servers: did the team test it in the lab before deploying it to production?)
  • When the business demands some new cloud-based service, help them understand what is available to them today and whether the new solution is a valid option.

I know this is a radical idea that probably seems crazy to many. But I hear from a large number of people that they experience these types of issues on a regular basis. During a recent project one of my teammates mentioned this idea and it really got me thinking. If you think I'm crazy, or onto something, drop a note in the comments.

 


Why Tintri is a great storage choice for vCloud Director

Some of the characteristics of a cloud make it a great solution for customers, but they introduce new challenges on the management side. Much like the hypervisor abstracted the hardware from virtual servers back in the early virtualization days, cloud now abstracts the hypervisor from the tenants of the cloud. Today this is by design; creating an environment that is easy to use also removes many of the management and provisioning choices from the users.

But by doing this you also limit some of the feature and performance decisions that users may need or demand from a cloud design. I don't think most cloud platforms yet have the features to help users easily make performance-related decisions or to create complex custom virtual machines. Let's talk through a few of these items below and why I think Tintri is best positioned to solve many of them today.

 

Complex Cloud VMs

So what do I mean by a complex cloud VM? The best example I can give for this scenario is a database server. The typical config that follows best practices is to separate the various parts of the database server onto separate virtual disks, preferably on discrete LUNs that meet the performance characteristics of each part of the database. This means the OS drive, database drive, temp DB drive and log drive would each be on an individual virtual disk, matched up with a datastore backed by the recommended RAID level. Along with selecting the right RAID level, you would also consider the number and type of disks backing each datastore.
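
To illustrate the kind of per-disk placement this implies, here is a rough sketch using pyVmomi (the vSphere Python SDK) of what an admin or an orchestration workflow might run today. The vCenter address, VM name, datastore names, sizes, tier mapping and unit numbers are all assumptions made up for the example; the OS disk is assumed to already exist from the template.

```python
# Hedged sketch: add the data, tempdb and log disks of a SQL VM, each on a
# datastore that matches its performance tier. All names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# role -> (datastore name, size in GB); a made-up tier mapping for the example
DISK_LAYOUT = {
    "data":   ("ds-raid10-15k", 500),
    "tempdb": ("ds-raid10-ssd", 100),
    "logs":   ("ds-raid10-15k", 200),
}

def by_name(content, vimtype, name):
    """Find a managed object of the given type by name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.DestroyView()
    return obj

def new_disk_spec(vm, datastore, size_gb, unit_number):
    """Device-change spec that creates a thin disk on the chosen datastore."""
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        datastore=datastore, fileName="[%s]" % datastore.name,
        diskMode="persistent", thinProvisioned=True)
    disk = vim.vm.device.VirtualDisk(
        backing=backing, controllerKey=controller.key,
        unitNumber=unit_number, capacityInKB=size_gb * 1024 * 1024)
    return vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)

ctx = ssl._create_unverified_context()  # lab shortcut: skip cert validation
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vm = by_name(content, vim.VirtualMachine, "sql-server-01")
changes = [new_disk_spec(vm, by_name(content, vim.Datastore, ds_name), size, unit)
           for unit, (role, (ds_name, size)) in enumerate(DISK_LAYOUT.items(), start=1)]
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes))
Disconnect(si)
```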

Today vCloud, and I would say most other cloud stacks, do not fully account for the performance requirements of these types of complex workloads when they are provisioned from a catalog, if at all. There will probably be some people who say this is possible in vCloud, but not without manual intervention from an admin in vSphere. My point is that this process should be hidden from the cloud consumer and happen automatically when they self-provision a VM.

The approach many typical arrays take is to provision tiered storage and present it as Org Virtual Data Centers (vDCs) in vCloud, then force the user to select a tier on which to place the entire VM when provisioning; this is certainly a viable option. You could use different types of storage, like an all-flash array or something that auto-tiers storage blocks on a scheduled basis, to create pools of storage. While these are valid options, I think they add extra complexity and are not very granular, meaning you usually have to make performance-based decisions for an entire LUN or pool of storage.

Now you are probably wondering how Tintri is going to make any of this simpler. I will not give you the entire Tintri value pitch, as that's not my job; you can review their features here. In a vCloud design specifically, Tintri solves this complex VM issue by offering a storage appliance that is VM-aware. First, Tintri uses its finely tuned methods to serve nearly all of your IO (both reads and writes) from its flash storage. This is very helpful because you will have clear visibility into which virtual machines are using higher amounts of storage IO, even down to the individual virtual disk. And the icing on the cake is that since Tintri is completely VM-aware, if you needed to guarantee the highest performance for a specific VM or virtual disk, you have the ability as an admin to pin the VM or disk to flash, thus guaranteeing the best performance for that virtual machine. I received a reply from Tintri explaining that while it is possible to manually pin VMs to flash, under normal use this is not needed, as their sub-VM QoS along with their flash hit percentage provides all workloads the throughput, IOPS and low latency they need. Today this type of granularity is not possible from other storage vendors.

So I tried to find a graphic that would visualize the awesome NFS performance that Tintri can provide, but when you search for NFS the results are heavily polluted with Need for Speed images. So who does not love a good burnout picture?

Why NFS is just a solid choice for vCloud

OK, so it's not like Tintri is the only NFS player in the market. But by choosing NFS as their storage protocol, I think they match up well with creating clean vCloud designs. As with the discussion above about creating tiers of storage for performance, you will also most likely be creating multiple datastores if you are using block-based storage. The need for multiple datastores could be driven by factors such as multiple tenants in your cloud and limits such as the size of, or number of VMs per, datastore.

NFS allows you to be more flexible by using fewer, larger exports, or in Tintri's case a single NFS datastore per Tintri appliance. This reduces waste by not having to cut your usable space into so many smaller parts for various design decisions. Now, there are sure to be some valid reasons you would need those divisions, and Tintri cannot solve everything. But many of the clouds being built by corporations today, which are mostly private in nature, would tend to benefit from what Tintri could offer their design.
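
As a small illustration of how simple that single-datastore model is on the vSphere side, here is a hedged pyVmomi sketch that mounts one NFS export on every host so the whole cluster, and therefore vCloud, sees a single shared datastore. The vCenter address, appliance hostname, export path and datastore name are placeholders, not real Tintri values.

```python
# Hedged sketch: mount the same NFS export on every host in the inventory.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut: skip cert validation
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

spec = vim.host.NasVolume.Specification(
    remoteHost="tintri01.example.com",  # assumed appliance address
    remotePath="/tintri/vcloud",        # assumed NFS export
    localPath="tintri-vcloud-ds",       # datastore name as vSphere will show it
    accessMode="readWrite")

hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    # Every host mounts the same export, so it appears as one shared datastore.
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
hosts.DestroyView()
Disconnect(si)
```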

And don't forget the benefit that NFS is thin provisioned by default, saving space on your storage without having to use Fast Provisioning in vCloud, which does not fit well with all designs.

To wrap all this up you should have these types of conversations with your VMware partner, vendor or internal teams to evaluate what might be the best solution based on the requirements of your cloud design. But no matter what your needs are, I think that you should take a serious look at what Tintri might be able to help you solve in the areas of performance and ease of management.

 


VMware needs to integrate Orchestrator more deeply into vCloud Director to improve cloud automation

In working on several cloud-related projects, one of the items that sticks out to me is the need for deeper automation within the vCloud Director product. I understand this is still just version 1.5, but with how hard VMware is pushing the "Your Cloud" journey, I think some parts are just not ready for what some companies need to do in the way of automation.

If self-service is supposed to be such a big part of cloud, then automation is going to play a big role. Not everything can be accomplished by creating templates and using customization to change the identity of the new VM. In server virtualization this worked great and saved time for most IT shops, but there were still manual processes that some shops needed to perform. That breaks the idea of self-service IT if a user still relies on someone to execute a manual process before a VM or application is provisioned from vCloud.
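
For context, this is roughly what the template-plus-customization step looks like when scripted directly against vCenter with pyVmomi; it only re-identifies the guest and its networking, which is exactly where its usefulness ends. Everything here (vCenter address, template, cluster, datacenter and VM names) is a made-up placeholder.

```python
# Hedged sketch: clone a VM from a template and apply guest customization.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def by_name(content, vimtype, name):
    """Find a managed object of the given type by name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.DestroyView()
    return obj

ctx = ssl._create_unverified_context()  # lab shortcut: skip cert validation
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

template = by_name(content, vim.VirtualMachine, "rhel6-template")
cluster = by_name(content, vim.ClusterComputeResource, "Cluster01")
datacenter = by_name(content, vim.Datacenter, "DC01")

# Guest customization: set the hostname/domain and use DHCP for the single NIC.
identity = vim.vm.customization.LinuxPrep(
    hostName=vim.vm.customization.FixedName(name="web-prod-042"),
    domain="example.com")
nic = vim.vm.customization.AdapterMapping(
    adapter=vim.vm.customization.IPSettings(ip=vim.vm.customization.DhcpIpGenerator()))
cust_spec = vim.vm.customization.Specification(
    identity=identity,
    globalIPSettings=vim.vm.customization.GlobalIPSettings(),
    nicSettingMap=[nic])

clone_spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
    customization=cust_spec,
    powerOn=True)

# Kick off the clone; anything beyond this point (agents, DNS, CMDB, approvals)
# is exactly the manual work the post is talking about.
template.CloneVM_Task(folder=datacenter.vmFolder, name="web-prod-042", spec=clone_spec)
Disconnect(si)
```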

I guess what this mostly applies to is private cloud. Many IT shops are trying to automate the creation of as many servers and platforms as possible to reduce their workload in provisioning new servers. But there are still some manual processes that need to take place, and I think being able to tie vCenter Orchestrator more tightly into vCloud Director could go a long way toward helping with this.
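
Until that tighter integration exists, the glue tends to be hand-written against the vCloud Director REST API and then handed off to an Orchestrator workflow. A hedged sketch of the first half, logging in and querying vApps, is below; the host, org and credentials are placeholders, and the Accept version header follows the vCD 1.5 API.

```python
# Hedged sketch: authenticate to vCloud Director 1.5 and list vApps.
import requests

VCD = "https://vcd.example.com"
headers = {"Accept": "application/*+xml;version=1.5"}

# vCD authenticates a POST to /api/sessions with basic auth as user@org and
# returns a session token in the x-vcloud-authorization response header.
login = requests.post(f"{VCD}/api/sessions",
                      auth=("automation@MyOrg", "secret"),
                      headers=headers, verify=False)
login.raise_for_status()
headers["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

# Use the query service to list vApps; the XML result is what a workflow
# (vCO or otherwise) would parse and act on.
vapps = requests.get(f"{VCD}/api/query?type=vApp", headers=headers, verify=False)
print(vapps.text)
```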

Other cloud software companies, such as DynamicOps, are already doing this type of thing by building the workflow and automation parts of their offerings into the same admin console. This allows for tight integration and opens up the options for what you are allowed to automate.

If you listen to rumors in dark alleys, you might hear that this type of integration is coming from VMware in a future release. Nobody knows if it will be in the next release or when that will happen.

 


Everything you wanted to know about HP BladeSystem Matrix

With all the talk about converged infrastructure and stacks these days, especially in the virtualization space, I was really glad that I got to do this interview. There has been a lot written about its competitors, but HP BladeSystem Matrix was still something of a mystery to anyone who had not had HP in to talk about it. I was lucky enough to spend some time talking with a couple of members of the HP BladeSystem Matrix team. These guys were very helpful in explaining what Matrix is and answered all of my crazy questions.

What I hope everyone gets from this is a better understanding of what BladeSystem Matrix has to offer if you're looking at these types of converged offerings, and a highlight of some of the features that are unique to the HP stack. In the interest of being totally open, I am also an employee of HP, but my current work responsibilities have nothing to do with BladeSystem Matrix. Now that all that is out of the way, let's get started with the good stuff.

VT: Can you give me your elevator pitch?
HP: Matrix is the foundation for a private cloud solution managing both physical and virtual infrastructure. Matrix allows you to rapidly provision infrastructure via a self-service portal. In addition, it offers ongoing life-cycle management, including capacity planning and disaster recovery. You can buy Matrix with a single SKU that includes hardware, software and services. The solution is all tested and certified by HP to work together.

VT: Who benefits from this solution?
HP: Customers who need to be able to address fast change and achieve a competitive advantage through time to market. Typical customers for Matrix are large enterprises and service providers who have already invested in virtualization and shared infrastructure and want to take the next step to cloud computing. I think these target customers are common to all converged infrastructure offerings.

VT: What hardware makes up a BladeSystem Matrix?
HP: BladeSystem Matrix all begins with something called a starter kit. This kit includes the following items: a Central Management Server on a ProLiant DL360, an HP C7000 Blade Chassis with Virtual Connect networking, and the Insight Management software for managing Matrix. For storage you have multiple options – you can use your existing Fibre Channel SAN storage if it's supported, or you can use HP storage, e.g. a 3PAR or HP EVA 4400 array. iSCSI storage is supported as well for VM datastores. There is also something called an expansion kit, which is a C7000 Blade chassis, Insight Management software licenses and the HP Services needed to integrate the expansion kit into your existing Matrix environment. It should be noted that Matrix supports both ProLiant and Integrity blades.

VT: What are HP Cloud Maps and how do they relate to BladeSystem Matrix?
HP: These Cloud Maps help customers get started quickly with Matrix – they jump-start the creation of a customized self-service portal. Cloud Maps include white papers and templates for hardware and software configurations that can be imported into BladeSystem Matrix, saving days or weeks of design time. A Cloud Map can also provide workflows and scripts designed to expedite the installation.

VT: What does the CMS or Central Management Server do?
HP: The CMS server is a physical server running the management software that controls, automates and monitors your BladeSystem Matrix. If you have a DR site with a Matrix, you would need a CMS server there to control that environment. It's also possible to set up the CMS in an HA, or Highly Available, configuration to prevent a single point of failure for Matrix management. Lastly, for large environments that exceed the maximums of a single CMS, you can now stand up secondary CMS servers while still managing everything from one admin console.

VT: Can I use existing HP gear with a Matrix install?
HP: If you purchase a new HP BladeSystem Matrix you can use it to also manage any qualifying HP hardware that you already own. HP has created something called the Matrix Conversion Services to assist with integrating your existing HP infrastructure with BladeSystem Matrix. This program is new and will evolve to allow customers to accomplish these integrations.

VT: Can I use arrays from other vendors?
HP: You can use storage arrays from other vendors as long as they are able to meet a list of criteria – for example the storage vendor needs to be certified with Virtual Connect. More details can be found in the Matrix compatibility chart.

VT: What software is used for Matrix?
HP: The software for Matrix is called the Matrix Operating Environment, which includes the whole Insight Management stack, including Insight Foundation and Insight Control. With Insight Foundation you get the controls to install, configure, and monitor physical servers. With Insight Control you get all the essential server management, including server deployment and power management. The real magic happens with the additional Matrix Operating Environment software (aka Insight Dynamics). It provides a service design tool, infrastructure provisioning with a self-service portal, capacity planning, and recovery management.

VT: Does it come configured and who does the setup work?
HP: Some factory configuration is done, then the remaining work is done onsite by HP Services. The install and configuration period can take from a few days to two weeks, depending on the level of complexity.

VT: How is it managed?
HP: There are two separate consoles that control a BladeSystem Matrix. The first is the admin console used by your support team to configure and control the environment. The second is the self-service portal, which allows IT consumers to request and provision resources from the Matrix environment.

VT: What types of automation and provisioning can Matrix do?
HP: One example would be in the creation of templates. You can create templates in the Matrix software or use ones already created, for example on your VMware vCenter server. If you use an existing template that was created with only one OS partition, you can use the Matrix template process to provision the VM and add additional disks and features not present in the base template.

VT: How is support handled for Matrix customers?
HP: There is a dedicated team to contact for Matrix support issues. Matrix is treated as a single solution, with all calls coming in through a central team. This team is cross-trained in the various aspects that make up Matrix and they will escalate to product-specific engineers if needed.

VT: Can you explain fail over P2V and then back to V2P for DR?
HP: This feature allows a physical server to be recovered at the DR site on either a physical or a virtual machine. To make this work, HP spoke about creating what is known as a "portable image." This means the logical server is created in a way that allows it to be deployed either on another physical blade or as a VM within a virtual machine host. I asked whether any type of conversion process takes place, and there is not. The engineers' description of the portable image suggested to me that you need to include OS drivers for both the physical hardware and the virtual hardware, so that when the image is moved to the other platform, the physical OS or the hypervisor-based OS will find all of its devices. The last piece is the network settings, which are preserved with an application called PINT so that your settings remain when new network cards are installed.

VT: How does it integrate with VMware?
HP: The HP tool set for BladeSystem Matrix offers many integration points with VMware vSphere. A short list of the functions would include provisioning VMs, changing power state, activating/deactivating, adding servers to a group, and adding disks to a VM or group of VMs. Along with those features, Matrix provides status and performance monitoring, capacity and workload analysis, and disaster recovery integration.

VT: What separates Matrix from other converged stacks?
HP: A big selling point is that HP BladeSystem Matrix is integrated and engineered holistically by one company, while still allowing for heterogeneous components in areas such as networking and storage. Also, at this time BladeSystem Matrix is the only solution capable of managing both physical and virtual servers with the same tools and allowing movement between physical and virtual resources. Something else that Matrix offers that others do not is integrated, automated disaster recovery. Lastly, Matrix supports both VMware and Microsoft Hyper-V, as well as Integrity Blades, for virtualization.

VT: What SAN protocols are supported today?
HP: As of today BladeSystem Matrix supports Fibre Channel as the preferred method of connecting to storage. In addition, Matrix does support FCoE and iSCSI for VM datastores.

VT: What is storage provisioning manager?
HP: This was explained as enhanced volume provisioning management, allowing more proactive maintenance of the pools of storage available for provisioning an environment. Where this seemed to tie in for me was using it to publish or tag which volumes are available for provisioning. For example, you could label one volume as a boot disk and others as data disks. Then, when creating your templates for provisioning, the system will know which volumes are available for boot and which are available as data volumes during the OS install, so that you get better management of the storage you'll utilize during provisioning.

VT: How many customers or units sold so far?
HP: I had to try but was only told that HP does not release any numbers or revenues for products. BladeSystem Matrix is made up of components that have been offered for many years by HP, and includes multi-million unit sales of components such as BladeSystem servers and Virtual Connect.

VT: How will software and firmware updates be handled?
HP: There are update bundles that are created for BladeSystem Matrix. At this time these updates must be performed by an HP Services person. These updates can be done in person or remotely.

VT: How does the SAN fabric interact with BladeSystem Matrix?
HP: In the current version of Matrix you will need to pre-create volumes and your server zoning ahead of any provisioning.

VT: What is Insight Virtualization Manager?
HP: Also known as VSE Virtualization Manager, it is part of Insight Dynamics. With VSE you can move a logical server from the blade it's currently running on to another blade. The VSE application will move the server profile to the new blade and restart the server once the move is complete, and your operating system will start up. The VSE interface will offer recommendations for target blades that match your requirements. There are a few reasons for such a move, including upgrades and maintenance. Video demo of moving a blade server to another blade: Video Link
