What type of storage to use for VMware Mirage

Surprisingly, Mirage has fairly lean requirements for its management infrastructure. One of the key pieces is storage; after all, what we are storing is our master image(s) and the backups from each endpoint, which makes Mirage a fairly storage-focused product.

A nice thing about VMware Mirage is that it does not require blazing-fast storage to provide its services. The CVD backups are stored in the single-instance store (SIS), which can live on Tier 2 or Tier 3 disks. This allows your design to use larger-capacity drives and keeps the cost of the deployment low.

One scenario where you might want to use Tier 2 disks is a Windows migration project. For example, if you are upgrading endpoints from Windows XP to Windows 7, you will be pushing out much larger amounts of data than during your normal steady state, so you could store the base layer for the migration on better-performing disk. In a project like this you are also likely to be pushing the base layer to larger numbers of endpoints during each wave. Once the migration is complete, an administrator can move the base layer back to a lower tier of storage.

Lab Test:

I performed a quick test in my home lab. I asked a test endpoint to re-sync with the Mirage server and watched the I/O activity on the Mirage server VM via ESXTOP. The Mirage server was running on an HP ML150 server with a local SSD drive. The test endpoint was running on the same host, located on a home-office NFS storage device connected via 1GbE. The average I/O on the Mirage server during the sync was 3-5 IOPS with a short spike to 30 IOPS. The endpoint moved between 30-100 IOPS. This was not a full push of an image, just an incremental sync of changes, and it was only a simple home-lab test to get an idea of what the workload might be for each sync.
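
If you want to get a similar read without watching ESXTOP interactively, you can pull the real-time disk counters for the Mirage server VM through PowerCLI. This is just a minimal sketch under my own assumptions; the vCenter address and VM name below are placeholders, not values from my lab.

# Connect to vCenter and grab the Mirage server VM (both names are placeholders)
Connect-VIServer -Server vcenter.lab.local
$vm = Get-VM -Name "mirage-server"

# Pull real-time read and write IOPS counters for the VM during a sync
Get-Stat -Entity $vm -Realtime -Stat disk.numberReadAveraged.average, disk.numberWriteAveraged.average |
    Sort-Object Timestamp |
    Select-Object Timestamp, MetricId, Value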

You can deploy a standalone Mirage server or a Mirage server cluster. Below are the storage options that each deployment method supports. These describe how the server connects to storage, not the performance tier of the storage.

Standalone Mirage Server:

  • Direct Attached Storage (DAS).
  • Storage Area Network (SAN) connected through iSCSI or Fibre Channel (FC).
  • Network Attached Storage (NAS) connected through iSCSI, Fibre Channel (FC), or a CIFS network share.

Mirage Server Cluster:

  • Network Attached Storage (NAS) connected through a CIFS network share.

If you are interested in other VMware Mirage topics, refer to my Mirage Series.


How to use vSphere Image Builder to create a custom ISO

With the release of vSphere 5, VMware included the ability to roll your own ISO files that include 3rd-party drivers and CIM providers. This is handy if your build requires drivers that are not available in the base VMware ISOs or your manufacturer's custom ISOs.

So why might you need this information? I know this task is on the study list for many people, and I've also been asked by team members over time how to make a custom ISO with drivers added in.

In this how-to guide I'm using the downloadable offline bundle that you can grab from VMware. It is also possible to pull the packages from an HTTP address, but I like to have the bundle local so I can use it whenever needed.

Step 1:

Open a PowerCLI window and type the following command. This will load the offline bundle that we want to use. I have already placed the ESXi offline bundle in a folder called Image on C:.

Add-EsxSoftwareDepot C:\image\name_of_file.zip

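This guide only covers the first step here, but for context the Image Builder workflow typically continues roughly as sketched below. The profile name, driver bundle, and output path are placeholders I've made up for illustration, not values from this article, so adjust them to match what your depot actually contains.

# Add the 3rd-party driver offline bundle alongside the ESXi bundle (placeholder file name)
Add-EsxSoftwareDepot C:\image\driver_bundle.zip

# List the image profiles in the depot, then clone one as a starting point (placeholder names)
Get-EsxImageProfile
New-EsxImageProfile -CloneProfile "ESXi-base-profile-standard" -Name "Custom-ESXi" -Vendor "Custom"

# Add the driver package to the cloned profile (placeholder package name)
Add-EsxSoftwarePackage -ImageProfile "Custom-ESXi" -SoftwarePackage "vendor-net-driver"

# Export the customized profile to a bootable ISO
Export-EsxImageProfile -ImageProfile "Custom-ESXi" -ExportToIso -FilePath C:\image\Custom-ESXi.iso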


Building new whitebox servers for VMware home lab

I have needed to add more capacity to the home lab for a while now, but I have taken my time. In the past I have been gathering up enterprise servers that are a couple of generations old. These have always served me well, but they have a limited amount of memory, upgrading them is pretty expensive, and they are very loud. So I decided to go in another direction and build a couple of whitebox servers based on common desktop parts. I've been watching for sales and collecting parts to build them, and after finding a couple of good deals lately I finally had everything needed to build two hosts.

Another decision I had to make was whether I needed a server-class motherboard or whether a desktop one would work. After thinking about it, I decided a desktop motherboard would work just fine and probably save a few dollars in build cost. At this point I almost never use the out-of-band management access on the enterprise servers I have, and since they are just down in the basement, I can easily run down and access them if needed.

I also did not need the ability to use VT-d, so a server board was even less important. I simply needed hosts with decent power and more RAM. It really comes down to memory for me: I needed the ability to run more VMs so that I don't have to keep turning things on and off.

The Why:

This type of lab is important to me for personal learning and for testing out configurations for the customer designs I work on during the day. I have access to a sweet lab at work, but it's just better to have your own lab where you are free to do what you want, and my limited bandwidth at the house makes remote access painful.

I want the ability to run a View environment, the vCloud suite, and my various other tools all at once. With these new hosts I will be able to dedicate one of my older servers as the management host and a pair of the older servers as hosts for VMware View. That leaves the two new hosts to run the vCloud suite and other tools.

The How:

I have set the hosts up to boot from USB sticks and plan to use part of the 60GB SSD drives for host cache, with the remaining disk space used for VMs. Each host will have 32GB of RAM, which is the max the motherboard will support with its 4 slots. There is an onboard 1Gb network connection, a Realtek 8111E according to the specs, and I can report that after loading vSphere 5.1 the card was recognized and worked without issue. I also had a couple of gigabit network cards lying around that I installed for a second connection in each host.

The case came with a fan included, but I added another for better cooling and airflow. Even with multiple fans running, the hosts are very quiet since there are no spinning disks in them, and they put out very little heat. I could probably have reduced the noise and heat a bit more by choosing a fanless power supply, but those are over $100 and it was not a priority for me.


New storage features in vSphere 5.1

With the announcement of vSphere 5.1 at VMworld yesterday, more detailed information is becoming available about the new storage features included in vSphere 5.1. There is some really great stuff on this list that has not gotten much notice yet. I really like that many of these features are now supported in both View and vCloud Director.

  1. VMFS file sharing limits – The previous limit on the number of hosts that could share a single file was 8 for block storage; in vSphere 5.0 it was raised to 32 hosts for NFS datastores. In vSphere 5.1 the limit for block storage (VMFS datastores) has also been raised to 32 hosts. This applies to both VMware View and vCloud Director, and the primary use case is linked clones.
  2. VAAI updates – VAAI NAS-based snapshots can now be used by vCloud Director; in vSphere 5.0 this was available only to View. This allows hardware-based snapshots for faster provisioning of linked clones in both products.
  3. Larger MSCS clusters – The previous limit of 2-node MSCS clusters has been raised to allow up to 5-node MSCS clusters with vSphere 5.1.
  4. All Paths Down update – The timing out of I/O on devices that enter an APD state has been updated to keep hostd from being tied up.
  5. Storage protocol enhancements – The ability to boot from software FCoE was added in vSphere 5.1. Jumbo frame support, including UI support, has been added for all iSCSI adapters. There is now full support for 16Gb Fibre Channel HBAs running at full speed; in vSphere 5.0, 16Gb adapters could be used but had to run at 8Gb speeds.
  6. Storage I/O Control (SIOC) updates – SIOC will now figure out the best latency threshold for your datastore, as opposed to the manual setting in vSphere 5.0. By default SIOC now runs in a stats-only mode, so it won't take any action but will collect stats for you before you configure settings (see the PowerCLI sketch after this list).
  7. Storage DRS (SDRS) enhancements – vCloud Director can now use SDRS for initial placement of fast-provisioned linked clones, managing both free space and I/O utilization. This is an improvement over vCloud Director 1.5, which used a free-space-only method and had no SDRS support.
  8. Storage vMotion enhancements – Storage vMotion now performs up to 4 parallel disk migrations per Storage vMotion operation.

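As a concrete example of the SIOC change in item 6, the snippet below shows one way to check and enable Storage I/O Control on a datastore with PowerCLI. It is only a sketch based on my assumptions; the datastore name and the manual threshold value are placeholders.

# Check whether SIOC is currently enabled on a datastore (placeholder name)
Get-Datastore -Name "Tier2-Datastore01" | Select-Object Name, StorageIOControlEnabled

# Turn SIOC on; the threshold is optional if you let vSphere 5.1 work out the latency value itself
Get-Datastore -Name "Tier2-Datastore01" |
    Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30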
 


VMware now includes vSphere Replication for free

Tucked in among all the other big announcements today at VMworld is the availability of vSphere Replication as a standalone option. It is no longer tied to SRM 5.0, as it was when originally made available. This is probably meant to counter the FUD Microsoft has been pushing about Hyper-V Replica being free.

What this means is that small shops will have a way to protect their virtual servers on a per-VM basis rather than on an entire datastore/LUN basis. It does not offer any of the fancy logic and automated recovery options that SRM gives you, but if you need some base-level replication and you don't have any array-based options, this should make you pretty happy. Below is the announcement from VMware.

vSphere Replication was introduced with SRM 5.0 as a means of protecting VM data using our in-hypervisor software based replication.  It was part of SRM 5.0, and continues to be, carrying forward, but now we are offering the ability to use this technology in a new fashion.

Today’s announcement about vSphere Replication is a big one:  We have decoupled it from SRM and released it as an available feature of every vSphere license from Essentials Plus through Enterprise Plus.

Every customer can now protect their environment, using vSphere Replication as a fundamental feature of the protection of your environment, just like HA.

VR does not include all the orchestration, testing, reporting and enterprise-class DR functions of SRM, but allows for individual VM protection and recovery within or across clusters.  For many customers this type of protection is critical and has been difficult to attain short of buying into a full multisite DR solution with SRM.  Now most of our customers can take advantage of virtual machine protection and recovery with vSphere Replication.

Check out an introduction to vSphere Replication at http://www.vmware.com/files/pdf/techpaper/Introduction-to-vSphere-Replication.pdf
