Say hello to vSOM or vSphere with Operations Management

Continuing on its recent path of creating software suites out of its popular products, VMware has announced the creation of vSOM. vSOM is the marriage of vSphere and vCenter Operations Management Standard edition. I think this seems like a pretty good idea; it will allow customers to purchase vC Ops and license it per socket, much like in the vCloud Suite. But vSOM will allow customers not ready for vCloud to take advantage of suite pricing.

Pricing and Availability
VMware vSphere with Operations Management is expected to be available in Q1 2013, and will be offered in three editions: Standard, Enterprise and Enterprise Plus. Pricing starts at $1,745 per processor with no core, vRAM or VM-count limits. See the VMware vSphere Pricing/Buy page for details. For a limited time, existing VMware vSphere edition customers will be able to upgrade to VMware vSphere with Operations Management editions or Acceleration Kits at 15 percent off the list price. New customers will be able to purchase VMware vSphere with Operations Management Acceleration Kits at 15 percent off the list price.

Based on the above paragraph it seems that VMware will offer three versions of vSOM, and they appear to mirror the three upper versions of vSphere being sold. What is not clear at this point is whether the version of vC Ops also follows the same pattern, or whether vC Ops Standard will be part of every level of vSOM with only the vSphere edition changing.

You can review the versions of vC Ops available and their features here.


About Brian Suhr

Brian is a VCDX5-DCV, a Sr. Tech Marketing Engineer at Nutanix and the owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and Cloud project designs. He was awarded VMware vExpert status for six years, 2011 - 2016, and holds the VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT and Cisco UCS Design certifications.


What type of storage to use for VMware Mirage

Surprisingly, Mirage has some fairly lean requirements for the management infrastructure. One of the key pieces is the storage; after all, what we are storing is our master image(s) and the backups from each endpoint. So Mirage is a fairly storage-focused product.

A nice thing about VMware Mirage is that it does not require blazing fast storage to provide its services. The CVD backups live in the single-instance store (SIS), which can be located on Tier 2 or Tier 3 disks. This allows your design to utilize larger-capacity drives, keeping the cost of deploying low.

One scenario I thought of where you might want to use Tier 2 disks is a Windows migration project. For example, if you are upgrading endpoints from Windows XP to Windows 7, you will be pushing out larger amounts of data than your current steady state, so you could store the base layer for this process on the better-performing disk. Another reason is that during a project like this you are likely to be pushing the image out to larger quantities of endpoints during each wave. Once the migration is complete, an administrator can move the base layer to a lower tier of storage.

Lab Test:

I performed a quick test in my home lab. I asked a test endpoint to re-sync with the Mirage server and watched the I/O activity on the Mirage server VM via esxtop. The Mirage server was running on an HP ML150 server with a local SSD drive, and the test endpoint was running on the same host, located on a home office NFS storage device connected via 1GbE. The average I/O during the sync on the Mirage server was 3-5 IOPS with a short spike to 30 IOPS, while the endpoint moved between 30-100 IOPS. This was not a full push of an image, just an incremental sync of changes, so treat it as a simple home lab test to get an idea of what the workload might be for each sync.
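If you would rather pull those numbers from vCenter than sit on esxtop, a minimal PowerCLI sketch like the one below will do it. The vCenter address and VM name are assumptions for this example, and it relies on the standard realtime virtual disk counters being available for the VM.

# Assumed names: vcenter.lab.local and a VM called "Mirage-Server".
Connect-VIServer -Server vcenter.lab.local
$vm = Get-VM -Name "Mirage-Server"
# Pull the last 30 realtime samples of per-virtual-disk read/write IOPS.
Get-Stat -Entity $vm -Realtime -MaxSamples 30 -Stat "virtualdisk.numberreadaveraged.average","virtualdisk.numberwriteaveraged.average" | Sort-Object Timestamp | Select-Object Timestamp, MetricId, Instance, Value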

You can deploy a standalone Mirage server or a Mirage server cluster. I have listed the storage options that each deployment method supports below. These are the methods for connecting to the storage, not statements about the performance of the storage.

Standalone Mirage Server:

  • Direct Attached Storage (DAS).
  • Storage Area Network (SAN) connected through iSCSI or Fibre Channel (FC).
  • Network Attached Storage (NAS) connected through iSCSI, Fibre Channel (FC), or a CIFS network share.

Mirage Server Cluster:

  • Network Attached Storage (NAS) connected using a CIFS network share

If you are interested in other VMware Mirage topics, refer to my Mirage Series.


How to use vSphere Image Builder to create a custom ISO

With the release of vSphere 5, VMware included the ability to roll your own ISO files to include 3rd party drivers and CIM providers. This is handy if you have a build that requires drivers that are not available in the base VMware ISOs or your manufacturer's custom ISOs.

So why might you need this information? Well, I know this task is on the study list for many people, and I've also been asked by some team members over time about how to make a custom ISO with drivers added in.

In this how-to guide I'm using the downloadable offline bundle that you can grab from VMware. It is also possible to grab the packages from an HTTP address, but I like to have the bundle local so I can use it when needed.

Step 1:

Open a PowerCLI window and type the following command. This will load up the offline bundle that we want to use. I have already placed the ESXi offline bundle in a folder called Image on C:.

Add-EsxSoftwareDepot C:\Image\name_of_file.zip
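For anyone who wants to see where this is headed, the remaining steps usually look something like the sketch below, run in the same PowerCLI session. The driver bundle, image profile names, package name and output path are placeholders for this example, not values from this build.

# Placeholder names and paths -- adjust for your own depot and drivers.
# Add the 3rd party driver offline bundle as a second depot.
Add-EsxSoftwareDepot C:\Image\vendor_driver_bundle.zip
# List the image profiles in the loaded depots and pick one to start from.
Get-EsxImageProfile | Select-Object Name, Vendor, CreationTime
# Clone the stock profile so we have a copy we are allowed to modify.
New-EsxImageProfile -CloneProfile "ESXi-5.1.0-799733-standard" -Name "ESXi-5.1.0-custom" -Vendor "lab"
# Add the driver package (VIB) from the second depot to the cloned profile.
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.1.0-custom" -SoftwarePackage "net-vendor-driver"
# Export the customized profile to a bootable ISO.
Export-EsxImageProfile -ImageProfile "ESXi-5.1.0-custom" -ExportToIso -FilePath C:\Image\ESXi-5.1.0-custom.iso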



Building new whitebox servers for VMware home lab

I have needed to add some more capacity to the home lab for a while now, but have taken my time. In the past I have been gathering up enterprise servers that are a couple of generations old. These have always done me well, but they have a limited amount of memory, upgrading them is pretty expensive, and they are very loud. So I decided to go another direction and build a couple of whitebox servers based on common desktop parts. I've been watching for sales and collecting the parts to build them, and after finding a couple of good deals lately I finally had all the parts needed to build two hosts.

Another thing I had to decide was whether I needed a server-class motherboard or whether a desktop one would work. After thinking about it, I came to the decision that a desktop motherboard would work just fine and probably save me a few dollars in the build cost. I almost never use the out-of-band management access on the enterprise servers that I have, and since they are just down in the basement, I can easily run down and access them if needed.

I also did not need the ability to use VT-d, so a server board was even less important. I simply needed hosts with good power and more RAM. It really comes down to memory for me; I needed the ability to run more VMs so that I don't have to turn things on and off.

The Why:

This type of lab is important to me for personal learning and for testing out configurations for the customer designs that I work on during the day. I have access to a sweet lab at work, but it's just better to have your own lab where you are free to do what you want, and my poor bandwidth at the house makes remote access painful.

I want the ability to run a View environment, the vCloud Suite and my various other tools all at once. With these new hosts I will be able to dedicate one of my older servers as the management host and a pair of the older servers as hosts for VMware View. This will leave the two new hosts to run the vCloud Suite and other tools.

The How:

I have set the hosts up to boot from USB sticks and plan to use part of the 60GB SSD drives for host cache; the remaining disk space will be used for VMs. Each host will have 32GB of RAM, which is the max the motherboard will support with its 4 slots. There is an onboard 1Gb network connection, a Realtek 8111E according to the specs, and I can report that after loading vSphere 5.1 the network card was recognized and worked without issue. I had a couple of gigabit network cards laying around that I installed for a second connection in each of the hosts.
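As a quick sanity check after the install, a couple of PowerCLI lines will confirm what each host actually picked up. The host name below is made up for this example.

# Hypothetical host name -- substitute your own whitebox host.
$esx = Get-VMHost -Name "whitebox01.lab.local"
# Confirm the version, CPU count and that all 32GB of RAM were recognized.
$esx | Select-Object Name, Version, NumCpu, MemoryTotalGB
# Confirm the onboard Realtek NIC and the add-in gigabit card both show up.
Get-VMHostNetworkAdapter -VMHost $esx -Physical | Select-Object Name, Mac, BitRatePerSec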

The case came with a fan included, but I added another for better cooling and airflow. Even with multiple fans running, the hosts are very quiet since there are no spinning disks in them, and they put out very little heat. I could probably have reduced the noise and heat a bit more by choosing a fanless power supply, but those are over $100 and it was not a priority for me.


Why Tintri is a great storage choice for vCloud Director

Some of the characteristics of a cloud make it a great solution for customers, but they introduce some new challenges on the management side. Much like the hypervisor abstracted the hardware from the virtual servers back in the early virtualization days, cloud is now abstracting the hypervisor from the tenants of the cloud. As of today this is by design; creating an environment that is easy to use also removes many of the management or provisioning choices from the users.

But by doing this you are also limiting some of the feature or performance decisions that users may need or demand out of a cloud design. I think the features are not yet there within most cloud platforms today to help users make performance-related decisions easily or to create complex custom virtual machines. Let's talk through a few of these items below and why I think Tintri is best positioned to solve many of them today.

 

Complex Cloud VMs

So what do I mean by a complex cloud VM? The best example I can give for this scenario is a database server. The typical config that follows best practices is to separate the various parts of the database server onto separate virtual disks, preferably on discrete LUNs that meet the performance characteristics of each part of the database. This means the OS drive, database drive, temp DB drive and log drive would each be an individual virtual disk, matched up with a datastore backed by the recommended RAID level. Along with selecting the right RAID level, you would also consider the number and type of disks used behind each datastore.
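To make that concrete, here is a rough PowerCLI sketch of what an admin would end up doing by hand in vSphere to lay out such a VM. The VM name, host, datastore names and sizes are purely illustrative.

# Illustrative names and sizes only -- one virtual disk per appropriately backed datastore.
$vm = New-VM -Name "sql01" -VMHost (Get-VMHost "esx01.lab.local") -Datastore (Get-Datastore "os-raid10") -DiskGB 40 -MemoryGB 8 -NumCpu 2
# Database files on a datastore built for random read/write I/O.
New-HardDisk -VM $vm -CapacityGB 200 -Datastore (Get-Datastore "data-raid10")
# TempDB and transaction logs each on their own datastore as well.
New-HardDisk -VM $vm -CapacityGB 50 -Datastore (Get-Datastore "tempdb-raid10")
New-HardDisk -VM $vm -CapacityGB 100 -Datastore (Get-Datastore "logs-raid1")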

Today vCloud, and I would say most other cloud stacks, do not fully account for the performance requirements of these types of complex workloads when they are provisioned from a catalog, or at all. There will probably be some people who say this is possible in vCloud, but not without manual intervention from an admin in vSphere. My point is that this process should be hidden from the cloud consumer and happen when they self-provision a VM.

The approach that many typical arrays take is to provision tiered storage and present it as Org Virtual Data Centers (vDCs) in vCloud, then force the user to select a tier on which to place the entire VM when provisioning; this is certainly a viable option. You could use different types of storage, like an all-flash array or something that auto-tiers storage blocks on a scheduled basis, to try and create pools of storage. While these are valid options, I think they add extra complexity and are not a very granular approach, meaning you usually have to make performance-based decisions on an entire LUN or pool of storage.

Now you are probably wondering how Tintri is going to make this any simpler. I will not give you the entire Tintri value pitch, as that's not my job; you can review their features here. In a vCloud design specifically, Tintri solves this complex VM issue by offering a storage appliance that is VM-aware. First, Tintri uses its finely tuned methods to serve nearly all of your I/O (both reads and writes) from its flash storage. You also get clear visibility into which virtual machines are using higher amounts of storage I/O, even down to the individual virtual disk. And the icing on the cake is that since Tintri is completely VM-aware, if you need to guarantee the highest performance for a specific VM or virtual disk, you have the ability as an admin to pin that VM or disk to flash, guaranteeing the best performance for the virtual machine. I received a reply from Tintri explaining that while it is possible to manually pin VMs to flash, under normal use this is not needed, as their sub-VM QoS along with their flash hit percentage provides all workloads the throughput, IOPS and low latency they need. Today this type of granularity is not possible from other storage vendors.

So I tried to find a graphic that would visualize the awesome NFS performance that Tintri can provide, but when you search for NFS the results are heavily polluted with Need for Speed images. So who does not love a good burnout picture?

Why NFS is just a solid choice for vCloud

OK, so it's not like Tintri is the only NFS player in the market. But by choosing NFS as their storage protocol, I think they match up well with creating clean vCloud designs. Much like the discussion above about creating tiers of storage for performance, you will also most likely be creating multiple datastores if you are using block-based storage. The need for multiple datastores could be driven by factors such as multiple tenants in your cloud and limits such as the size of, or number of VMs per, datastore.

NFS allows you to be more flexible by using fewer, larger exports, or in Tintri's case a single NFS datastore per Tintri appliance. This reduces waste by not having to cut up your usable space into so many smaller parts for various design decisions. Now, there are sure to be some valid reasons you would need to have those divisions, and Tintri cannot solve everything. But many of the clouds being built by corporations today are mostly private in nature and tend to benefit from what Tintri can offer their design.
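As a simple illustration of how little provisioning work is left with that model, mounting one large NFS datastore across a cluster is a short PowerCLI loop. The cluster name, NFS server address and export path below are made up for this example.

# Made-up cluster name, NFS address and export path -- one large datastore for every host.
foreach ($esx in Get-Cluster "vcloud-cluster" | Get-VMHost) {
    New-Datastore -VMHost $esx -Nfs -Name "tintri-ds01" -NfsHost "192.168.50.10" -Path "/tintri/vmstore"
}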

And let's not forget the benefit that NFS is thin provisioned by default, saving space on your storage without having to use Fast Provisioning in vCloud, which does not fit well with all designs.

To wrap all this up you should have these types of conversations with your VMware partner, vendor or internal teams to evaluate what might be the best solution based on the requirements of your cloud design. But no matter what your needs are, I think that you should take a serious look at what Tintri might be able to help you solve in the areas of performance and ease of management.

 
