How to install the vCloud Hybrid Service (vCHS) plugin for the web client

Last week VMware announced the availability of a web client plugin to help manage vCloud Hybrid Service (vCHS). This sounded pretty cool, but what really interested me was getting to try a plugin for the vSphere web client. There are very few plugins available for the web client at this point, as vendors are still working on updating their existing plugins to work with the new client. Plugin documentation link.

For this write-up I set up a test instance of the vCenter Server Appliance and will be using an existing vCHS account. I will install the plugin and connect it to my account.
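Before walking through the screenshots, one note: once installed, the plugin registers itself as an extension with vCenter. If you want to sanity-check that registration outside of the web client, something like the short pyVmomi sketch below will list the extensions vCenter knows about. The hostname and credentials are placeholders of my own, and I am not asserting the exact extension key the vCHS plugin uses.

```python
# Rough sketch: list the extensions (web client plugins) registered with a vCenter server.
# The hostname and credentials are placeholder lab values, not from VMware's docs.
import ssl
from pyVim.connect import SmartConnect, Disconnect

context = ssl._create_unverified_context()  # lab appliance with a self-signed certificate
si = SmartConnect(host="vcva.lab.local", user="root", pwd="vmware", sslContext=context)

try:
    for ext in si.content.extensionManager.extensionList:
        label = ext.description.label if ext.description else ""
        print(f"{ext.key}  {ext.version}  {label}")
finally:
    Disconnect(si)
```

If the install worked, the vCHS plugin should show up in that list alongside the built-in vCenter extensions.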

[Image 1: Register the plugin]

About Brian Suhr

Brian is a VCDX5-DCV, a Sr. Tech Marketing Engineer at Nutanix, and the owner of this website. He is active in the VMware community, helps lead the Chicago VMUG group, and specializes in VDI and cloud project designs. Awarded VMware vExpert status six years running, 2011 - 2016. Certifications: VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design.

Read More

How to create a VM in VMware vCHS – vCloud Hybrid Service

I will now take you through the process of deploying a new VM from the global catalog in vCHS. The process is very much like it was in vCloud Director, if you have used that before, but the new portal in vCHS makes it a bit easier to understand for the average user.

The first step in the process is choosing which Virtual Datacenter to deploy the VM into. The image below shows the options available: the four Virtual Datacenters my account has access to.

[Image: Step 1 – Choosing a Virtual Datacenter]

Next up is selecting which item we want to deploy from the VMware Catalog. This is the global catalog that VMware publishes for vCHS users, and it contains a small number of operating system images that are kept up to date with patches. To keep things simple, let's choose a 32-bit CentOS 6.4 VM to deploy.
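As a side note, the same deployment can be driven through the API, since vCHS is built on vCloud Director and exposes the standard vCloud API. The sketch below only covers authenticating and listing the organizations this account can see, which is the starting point for walking down to the catalog and posting an instantiateVAppTemplate request. The endpoint URL, login, and org identifier are made-up placeholders, and targeting version 5.1 of the API is an assumption on my part.

```python
# Minimal sketch: authenticate to the vCloud API behind vCHS and list the visible orgs.
# The endpoint, username, org identifier, and password are placeholders for illustration.
import requests

API = "https://p1v17-vcd.vchs.example.com/api"          # hypothetical vCHS endpoint
USER = "me@example.com@M123456789-4567"                 # vCloud logins take the form user@org
HEADERS = {"Accept": "application/*+xml;version=5.1"}   # assuming the 5.1 vCloud API version

# POST /api/sessions with basic auth; the session token comes back in x-vcloud-authorization.
resp = requests.post(f"{API}/sessions", headers=HEADERS, auth=(USER, "my-password"))
resp.raise_for_status()
HEADERS["x-vcloud-authorization"] = resp.headers["x-vcloud-authorization"]

# GET /api/org returns an OrgList; following its links leads to the vDCs and catalogs
# that correspond to the choices the vCHS portal walks you through here.
orgs = requests.get(f"{API}/org", headers=HEADERS)
print(orgs.text)
```

For this walkthrough, though, I will stick with the portal.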


Read More

Chargeback reporting in VMware vCAC

A reader asked a great question in a comment about the ability to look at charges for running VMs in vCAC. I thought that would make a great blog post, and after some lab time this is what I was able to put together.

The Chargeback reporting that is built into vCAC is very easy to use but somewhat limited. There are just 3 reports available for Chargeback reporting. These can give you a good idea of what is going on in your VMware cloud, but as you grow and mature your cloud you might require more detailed reporting. This is where I hope VMware merges the vCAC reporting with the Chargeback abilities that are now included in the vCenter Operations (vC Ops) Suite; combined, they would make a powerful tool.

The first report, shown below in Image 1, is a Chargeback report grouped by reservation and sorted by provisioning group (PG). In vCAC a PG is probably closest to what an Organization is in vCloud Director. The numbers in these reports are just example values we tossed in so that we would get some data back; not much thought went into figuring out any real costs.

[Image 1: Chargeback report grouped by reservation]


Read More

Building new whitebox servers for VMware home lab

I have needed to add more capacity to the home lab for a while now, but have taken my time. In the past I have been gathering up enterprise servers that are a couple of generations old. These have always done well for me, but they hold a limited amount of memory, upgrading them is pretty expensive, and they are very loud. So I decided to go another direction and build a couple of whitebox servers based on common desktop parts. I've been watching for sales and collecting the parts to build them, and after finding a couple of good deals lately I finally had all the parts needed to build two hosts.

Another decision I had to make was whether I needed a server-class motherboard or whether a desktop one would work. After thinking about it, I decided that a desktop motherboard would work just fine and probably save me a few dollars in the build cost. I almost never use the out-of-band management access on the enterprise servers I already have, and since they are just down in the basement I can easily run down and access them if needed.

I also did not need VT-d, so a server board was even less important. I simply needed hosts with good power and more RAM. It really comes down to memory for me: I needed the ability to run more VMs so that I don't have to keep turning things on and off.

The Why:

This type of lab is important to me for personal learning and for testing out configurations for the customer designs I work on during the day. I have access to a sweet lab at work, but it's just better to have your own lab where you are free to do what you want, and the poor bandwidth at my house makes remote access painful.

I want the ability to run a View environment, the vCloud Suite, and my various other tools all at once. With these new hosts I will be able to dedicate one of my older servers as the management host and a pair of the older servers as hosts for VMware View. That leaves the two new hosts to run the vCloud Suite and other tools.

The How:

I have set the hosts up to boot from USB sticks and plan to use part of the 60GB SSD drives for host cache, with the remaining disk space used for VMs. Each host will have 32GB of RAM, which is the max the motherboard supports with its 4 slots. There is an onboard 1Gb network connection, a Realtek 8111E according to the specs, and I can report that after loading vSphere 5.1 the network card was recognized and worked without issue. I had a couple of gigabit network cards laying around that I installed for a second connection in each of the hosts.
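If you want to double-check that kind of build from a script instead of clicking through the client, a small pyVmomi sketch can report RAM, NICs, and any host cache reservation on each host. The vCenter hostname and credentials are placeholders, and the property names reflect my reading of the vSphere API, so treat it as a starting point rather than gospel.

```python
# Quick post-install check of the whitebox hosts: RAM, NICs (was the Realtek recognized?),
# and any SSD space reserved for host cache. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="vcva.lab.local", user="root", pwd="vmware", sslContext=context)

try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        mem_gb = host.hardware.memorySize / (1024 ** 3)
        print(f"{host.name}: {mem_gb:.0f} GB RAM")
        for nic in host.config.network.pnic:
            print(f"  NIC {nic.device} driver={nic.driver}")
        cache_mgr = host.configManager.cacheConfigurationManager
        if cache_mgr and cache_mgr.cacheConfigurationInfo:
            for cache in cache_mgr.cacheConfigurationInfo:
                print(f"  host cache: {cache.swapSize} MB on {cache.key.name}")
finally:
    Disconnect(si)
```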

The case came with a fan included, but I added another for better cooling and airflow. Even with multiple fans running, the hosts are very quiet since there are no spinning disks in them, and they put out very little heat. I could probably have reduced the noise and heat a bit more by choosing a fanless power supply, but those run over $100 and it was not a priority for me.


Read More

New storage features in vSphere 5.1

With the announcement of vSphere 5.1 at VMworld yesterday, some more detailed information is becoming available around the new storage features included in the release. There is some really great stuff on this list that has not gotten much notice yet. I really like that many of these features are now supported in both View and vCloud Director.

  1. VMFS file sharing limits – The previous limit for the number of hosts that could share a single file was 8 for block storage; in 5.0 it was raised to 32 hosts for NFS datastores. In vSphere 5.1 the limit for block storage (VMFS datastores) has also been raised to 32 hosts. This is supported by both VMware View and vCloud Director, and the primary use case is linked clones.
  2. VAAI updates – VAAI NAS-based snapshots are now available to vCloud Director; in vSphere 5.0 they were available only to View. This allows hardware-based snapshots for faster provisioning of linked clones in both products.
  3. Larger MSCS clusters – The previous limit of 2-node MSCS clusters has been raised to allow up to 5-node MSCS clusters with vSphere 5.1.
  4. All Paths Down update – The timing out of I/O on devices that enter an APD state has been updated to keep hostd from being tied up.
  5. Storage protocol enhancements – The ability to boot from software FCoE was added in vSphere 5.1. Jumbo frame support has been added for all iSCSI adapters, with UI support. 16Gb Fibre Channel HBAs are now fully supported at full speed; in vSphere 5.0, 16Gb adapters could be used but had to run at 8Gb speeds.
  6. Storage I/O Control (SIOC) updates – SIOC will now figure out the best latency setting for your datastore, as opposed to the manual setting in vSphere 5.0. By default SIOC now runs in a stats-only mode, so it won't take any action but will collect stats for you before you configure settings (see the sketch after this list).
  7. Storage DRS (SDRS) enhancements – vCloud Director can now use SDRS for initial placement of fast-provisioned linked clones, managing both free space and I/O utilization. This is an update from the free-space-only method that vCloud Director 1.5 used, which had no SDRS support.
  8. Storage vMotion enhancements – Storage vMotion now performs up to 4 parallel disk migrations per operation.
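To make the SIOC item a bit more concrete, here is a small read-only pyVmomi sketch that reports the SIOC state of each datastore, including the stats-collection flag that I understand backs the new stats-only default in 5.1. The vCenter hostname and credentials are placeholders, and my interpretation of statsCollectionEnabled is an assumption rather than something from the announcement.

```python
# Report the Storage I/O Control (SIOC) state of each datastore: enabled or not, the
# congestion threshold in ms, and the stats-collection flag. Credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="vc.lab.local", user="administrator", pwd="vmware", sslContext=context)

try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        iorm = ds.iormConfiguration
        if iorm is None:
            continue  # some datastore types do not report IORM settings
        stats_only = getattr(iorm, "statsCollectionEnabled", None)  # added in 5.1
        print(f"{ds.name}: SIOC enabled={iorm.enabled}, "
              f"threshold={iorm.congestionThreshold} ms, stats collection={stats_only}")
finally:
    Disconnect(si)
```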

 


Read More