How to upgrade to VMFS 5 on VMware and VMFS 5 facts

I wrote this last year but never published it; I'm working on clearing out some old posts. Along with the long list of other features added to vSphere 5, VMware has included a new version of VMFS. The upgrade brings us to version 5 of the file system.

VMware's main focus while creating VMFS-5 seems to have been making storage easier to manage in virtual environments. In VMFS-5 the number of storage-related objects that a VMware administrator needs to manage is far lower. For example, you can now use fewer, larger datastores, because the scaling limits of VMFS-5 have been increased.

VMFS-5 New Features

  • Unified 1MB File Block Size. Past versions of VMFS used 1, 2, 4 or 8MB file blocks. The larger block sizes allowed you to create files larger than 256GB. There is now just one block size in VMFS-5, and it allows you to create VMDK files of up to 2TB using 1MB file blocks.
  • Larger Datastores. In previous versions of VMFS, the largest datastore size without extents was 2TB minus 512 bytes. With VMFS-5 this limit has been increased to 64TB.
  • Smaller Sub-Block. VMFS-5 introduces a smaller sub-block: 8KB rather than the 64KB used in previous versions. A small file larger than 1KB but smaller than 8KB will now consume only 8KB rather than 64KB, which reduces the disk space consumed by these small files.
  • Small File Support. Files less than or equal to 1KB now use the file descriptor location in the metadata for storage rather than file blocks. If they grow above 1KB, these files start to use the new 8KB sub-blocks. The net result is a reduction in space consumed by small files.
  • Increased File Count. VMFS-5 now supports more than 100,000 files. In VMFS-3 the limit was 30,000 files.
  • ATS Enhancement. ATS (Atomic Test & Set) is now used throughout VMFS-5 for file locking. ATS is a Hardware Acceleration primitive and is part of VAAI (vSphere Storage APIs for Array Integration). This improves file locking performance over previous versions of VMFS.
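
If you are ready to move an existing datastore, the in-place upgrade from VMFS-3 to VMFS-5 can be done from the command line as well as from the vSphere Client. Below is a minimal sketch using the esxcli storage namespace in ESXi 5; the datastore label is a placeholder for your own, so check esxcli storage vmfs upgrade --help on your build for the exact options. Keep in mind that an upgraded datastore keeps its original VMFS-3 block size; only freshly formatted VMFS-5 datastores get the unified 1MB block size.

# esxcli storage vmfs upgrade -l my_datastore
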
Read More

How to backup ESX and ESXi host configurations

When it comes to protecting your virtual environment there are many things to consider. You need backups of your virtual machines, and don’t forget about your host configurations.

How to back up your ESXi configuration

There are many reasons you would want to back up your ESXi configuration; the two main ones are before upgrading to a new version and for DR purposes.

If you are going to be upgrading an existing ESXi host to ESXi 5 you should back up your host configuration before proceeding. With vSphere 5 upgrades there is no option to roll back like there was with vSphere 4 upgrades. This means that a failed upgrade would require you to install ESXi 4.x and restore the configuration.

To back up an ESXi host you will need the vCLI installed on a server, or you can use the vMA.

# vicfg-cfgbackup --server ESXi_host_ip --username username --password password -s backup_filename

Note that the backup file is saved on the machine where you run the command (your vCLI system or the vMA appliance), not on the host itself, so keep it somewhere safe off the host.

How to restore your ESXi configuration

Another really nice thing about ESXi is that it’s just as easy to restore your backed-up configuration as it was to grab the backup. Simply install a clean version of ESXi matching the version that the backup was taken from. Connect to the host using vCLI or your vMA appliance and issue the restore command shown below.

# vicfg-cfgbackup --server ESXi_host_ip --username username --password password -l backup_filename
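
One thing worth noting: if the rebuilt host is on a different build number than the one the backup was taken from, the restore may refuse to run. As far as I know, vicfg-cfgbackup has a force option (-f) for exactly this situation; verify it against the vCLI documentation for your version before relying on it.

# vicfg-cfgbackup --server ESXi_host_ip --username username --password password -f -l backup_filename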

How to back up your ESX configuration

Unfortunately, there is not one single command to back up an ESX host’s configuration.

To accomplish this you will need to back up the following items manually (a scripted sketch follows the list).

  • Back up the local VMFS file system – templates, VMs and .iso files
  • Back up any custom scripts
  • Back up your .vmx files
  • Back up the /etc/passwd, /etc/group, /etc/shadow and /etc/gshadow files. The /etc/shadow and /etc/gshadow files might not be present on all installations.
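
There is no official one-liner for this, but below is a rough sketch of how you might gather everything from the ESX service console. The archive names and the backup_server destination are placeholders for your environment; the second command simply feeds every .vmx file found under /vmfs/volumes into a tar archive.

# tar -czvf /tmp/esx_config_backup.tar.gz /etc/passwd /etc/group /etc/shadow /etc/gshadow
# find /vmfs/volumes -name "*.vmx" | tar -czvf /tmp/vmx_backup.tar.gz -T -
# scp /tmp/esx_config_backup.tar.gz /tmp/vmx_backup.tar.gz user@backup_server:/backups/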

How to restore your ESX configuration

If you need to roll back from a failed upgrade, or recover from a disaster and need to restore your host, follow this short process. First, install ESX 4.x at the version level you were running at the time you backed up your files.

Once you have ESX 4.x installed and running at its previous level, you can restore the files you backed up earlier. This can be done many ways, but a couple of simple options are WinSCP and Veeam FastSCP; both are free and easy to use.

Read More

How does FreeNAS perform with VMware vSphere

I was talking to a co-worker who was kicking around the idea of using FreeBSD and ZFS for shared storage in his home lab. I thought it sounded decent but never imagined that it could squeeze a ton of IOPS out of some old hardware. So to make my life easier, since I’m no Linux geek, I elected to run FreeNAS 8, which is the same setup wrapped up in a nice package with a web GUI. Perfect for a former Windows geek.

Now I never really had very high hopes of getting much performance out of the test server that I would be using. And after eating some Chinese takeout, you can see the fortune that I got was telling me not to get my hopes up.

So I dug up one of my old servers that I had retired after vSphere 5 was released. It is a 64-bit machine but does not have VT-capable CPUs, so it is not much use as a vSphere host any longer. But it was the best candidate for this storage experiment. The server is an IBM x346 with two dual-core Xeon CPUs and 6GB of RAM. I am using just one of the onboard 1GbE connections for Ethernet. For disks I have four 72GB U320 10K drives, of which one is used for the OS install and the other three will be used for a ZFS volume. Yes, that’s right, I am going to use just 3 x 10K SCSI drives for this NAS box. I know what you are probably thinking. A picture of this awesome 6+ year old machine is shown below.

Read More

A list of some vCloud Director best practices

These are some of the best practices that I have come across in my work with vCloud Director. Some are from VMware, some from other bloggers, and some from my own experience. Where it makes sense I will add some editorial so that it’s not just a generic statement that might not be clear to everyone.

I’ve broken them up into a few sections. Best practices are design covenants and processes to follow. Helpful tips are items that can make your life easier or help with performance, and things to avoid are simply that. I will continue to add to these lists as new items come up. If you have any suggestions drop me a note or leave a comment.

Best Practices:

  • Connect a provider datacenter to a vSphere cluster rather than a resource pool when possible. Using resource pools that further divide up the resources of clusters adds an extra layer of management for admins and increases your risk of affecting performance if settings are not correct for a particular resource pool.
  • Create a separate management cluster for the vCenter that controls the resource clusters for vCloud, and for the other infrastructure services used to support vCloud.
    – Management components are separate from the resources they are managing
    – The overhead for cloud consumer resources is minimized. Resources allocated for cloud use have little overhead reserved. For example, cloud resource groups do not host vCenter VMs
    – Resources are dedicated to the cloud. Resources are consistently and transparently managed and carved up and scaled horizontally
    – Troubleshooting and problem resolution are quicker as management components are strictly contained in a relatively small and manageable management cluster
  • Create an organization with a pay-per-use vDC in which to store your global catalog vApp templates. This will not consume any resources from the cluster because the vApps are never powered on.
  • To determine the number of vCloud Director cells needed, use the following formula: number of cell instances = n + 1, where n is the number of vCenter Server instances. If your vCenter Servers are small, meaning fewer than 2,000 VMs each, a single vCloud cell can manage several vCenter Servers; in that case use number of cell instances = n/3000 + 1, where n is the number of expected powered-on VMs (see the worked example after this list).
  • Do not mix tiers of storage within a single provider datacenter
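
As a quick worked example of the cell-count formulas above: with two vCenter Server instances you would deploy 2 + 1 = 3 cells. If instead several small vCenter Servers share cells and you expect around 7,000 powered-on VMs, the second formula gives 7000/3000 + 1 ≈ 3.3, which rounds up to 4 cells.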

Helpful Tips:

  • If you have VMs that do not generate high I/O, consider using Fast Provisioning (linked clones) to save storage space and speed up provisioning
  • Make sure to size the NFS volume attached to your vCloud Director cells large enough for the concurrent events that might take place in your design. Refer to the great post by Chris Colotti about when cells use NFS.
  • When you create a Pay as you go vDC you will be asked to set the default vCPU speed for new vApps being provisioned. By default it’s a very low amount (0.26GHz), so you will need to adjust it for your environment. There are two great blog posts here and here about this topic.
  • vCloud limits linked clone chain length to 30, and performance can be affected as VMs approach this limit. If you are looking to find out the lengths of your linked clone chains, William Lam wrote a script to do just that.
  • You can view the chain length of a specific VM by looking at its properties within the vCloud dashboard.
  • Be mindful when choosing the reservation levels for CPU and Memory when creating Org vDCs. You may think going with a lower percentage of commitment to allow you to over-provision is an OK strategy. But these reservation values are pivotal in calculating the values that HA admission control uses to determine whether VMs can be restarted after a host failure. If you commit too low you may not be able to restart all the VMs in your vDCs. If you need more details about admission control I suggest reading Duncan’s post on HA.

Things to Avoid:

  • If using fast provisioning (linked clones) on a VMFS datastore, limit cluster nodes to eight or fewer.
  • Be aware that there is a chance of hitting the snapshot chain length limit. If the current clone has become very slow compared to the prior clone, it may have hit the snapshot chain length limit of 30. This can be resolved by virtual machine consolidation.
  • When adding an existing vSphere cluster as a Provider vDC in vCloud, be aware that when VCD goes to prep the hosts and install the agents it wants to prepare all the hosts at once rather than stagger them. One workaround is to supply a bad password on purpose so the bulk preparation fails, then go to the hosts list and prepare the hosts one at a time.

Read More

Strange network behavior on VM imported into VMware vCloud Director

While doing some lab work for a vCloud private cloud design, I noticed a strange behavior on virtual machines that are imported into vCloud from vCenter. It was not something I had noticed in the past for some reason, but it really struck me as odd for the tests I was trying to work on.

What I found when importing a VM from vCenter into one of the organizations in my lab cloud was that the vNIC on the VM was still attached to the vSphere port group. This really should not be possible, as vCloud is supposed to abstract this from your view in the cloud. I came to notice it when I tried to set the VM to obtain an IP from the pool that was configured for the external network attached to the Org network I thought I was using. It kept returning the error that no IPs were available in the pool, which I knew was wrong because I had checked it several times. What tripped me up was that I had used a similar name for my vCloud network as for my port group, so it did not sink in right away.

What the error was really trying to explain was that the vSphere port group on the host did not have an IP pool configured and could not give the VM an IP. This all struck me as very weird, because I expected vCloud to assign the vNIC to a valid network within my Org, much like it handles the placement of VMDK files for storage when importing.

So the fix was to edit the virtual machine within vCloud and, under the network drop-down, add in an Org network that already existed. This was more confusing than it needed to be, because it makes you think you are adding a new network rather than just attaching one. I have included an example below.

Example:

The image below shows the network settings displayed when editing the VM. I added the Public-org network from vCloud afterwards; you can also see the different icons next to the networks listed.

In the next image I am showing the external networks that are set up within vCD. These are mapped directly to the vSphere port groups shown in the far right column. The red box is around the “VM Network” port group that exists in vSphere but was clearly still attached to the imported VM.

Read More

vSphere ESXi 5 upgrade or install how to steps

This is something that I wrote last year during the vSphere 5.0 beta; I had intended on using it with another project. After holding onto it for a long time I finally decided to publish it here. There will be some other related content coming soon.

With the release of vSphere 5, VMware has entered the era of ESXi-only hypervisors. This has been promised by VMware for the last couple of years, so it should come as no surprise to anyone. The ESXi platform has undergone a big coming-of-age journey since its first release. With each new version and update, ESXi has narrowed the feature gap that previously existed with its brother, ESX classic.

With this release VMware’s type 1 hypervisor has entered its fifth generation, and in this book we are going to assume that you have a base level of experience. We will not be holding your hand through each step of a base installation. We will be talking about topics that concern admins on important projects and daily tasks, and showing you how to accomplish some of the new features in vSphere 5.

Upgrade considerations and dependencies

With any VMware-related upgrade there are numerous items that should be considered when planning the move to the next release, whether you’re going to be upgrading on existing hardware or purchasing new servers. You need to spend the time to examine the components of your servers and validate that they are supported by the release of vSphere you plan on using. This can be done using the VMware HCG, or Hardware Compatibility Guide, also commonly referred to as the HCL.

The release of vSphere 5 offers most of the same upgrade paths as before, but also offers some not possible in the past. To make this easy to digest, we have created Figure 1.0, which covers the upgrade paths and whether they are possible with ESXi 5. Each of these methods will be expanded upon within the sections of this chapter.

Read More

Designing vSphere clusters to use as vCloud director provider vDCs

In talking with different customers I have been getting questions about how clusters should be sized and utilized in a vCloud environment. Before we get too far into the details, I will give a brief overview of how capacity is presented to vCloud.

When setting up resources or capacity within vCloud you first have to create Provider Virtual Datacenters (vDCs). These provider vDCs map back to either a single vSphere cluster or a resource pool. This means that, depending on your decision to scale out or up, you will end up with either a few large pools of capacity or more, smaller ones. There are many reasons you might choose either option, so I won’t go into that discussion.

At the end of the post I have described the three options for allocating resources to Organization vDCs. These provide you with different methods to assign, guarantee or overcommit resources. With any of these you will need to decide how you will construct the vSphere HA clusters that will support your cloud.

So you might be wondering what effect an HA cluster could have on your cloud. To give you a simple example: depending on the options you use to build your HA cluster, tenants of the cloud could provision too many VMs, so that upon a failure not all VMs can restart, or a newly provisioned vApp fails to start because it failed admission control. These are just some of the things to think about, along with what level of failure you will design for. Do you need to be able to sustain one host failure (N+1) or two host failures (N+2)?

Example 1: You configure a vSphere cluster for HA and allow it to sustain up to one host failure, or specify the percentage of cluster resources it will reserve for spare capacity. Both of these methods do their best to guarantee there are resources available for failover. Back in the straight server virtualization days you had an admin team that managed this and the provisioning, making sure the limits were not violated. But in a vCloud deployment you will need to allocate these resources to vDCs. Depending on which allocation method you choose, you might be overcommitting resources or allocating 100% of them (see the options at the end of this post). This could allow the cloud tenants to provision too many vApps, causing them to not start, or to not restart after a host failure.

To avoid this issue you need to architect your clusters and vDCs with close attention to sizing and to how you allocate the resources. Also, when adding capacity to a cluster you will need to modify the settings on the vDC in vCloud to account for the capacity that was added, paying close attention not to get your calculations out of whack.
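
To make that concrete, here is a simple illustration with made-up numbers: say you have a four-host cluster with HA admission control set to reserve 25% of cluster resources for failover (roughly one host’s worth). Only 75% of the cluster’s CPU and Memory is actually available to run VMs, so if your Org vDCs are collectively allocated 100% of raw cluster capacity with low reservations, tenants can provision enough vApps that some fail admission control or cannot restart after a host failure. Sizing the vDC allocations against that 75%, not the raw cluster total, keeps the math honest.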

Example 2: This method is something that I have never been much of a fan of for straight server virtualization, but I sometimes use it for cloud designs. In this option a failover host is specified; this host sits waiting to take over after a host failure. In this design you would not be able to allocate these resources, since they are not available for use. This is usually regarded as a waste when people hear that a host will be sitting there doing nothing, but it does accomplish what we are trying to do.

Now when creating your vDCs in vCloud you can allocate up to 100% of the resources, since there is now dedicated failover capacity. I personally don’t like to allocate 100% even in this scenario; I like to use something in the 90-95% range, which allows for a little bit of breathing room. You could also take the route of allocating a smaller percentage and then using the rest to increase capacity as needed.

When you expand the cluster you still need to edit the vDC in vCloud to allocate the newly added capacity, but it will be a much more straightforward process, because all you have to do is increase the resources to the level you had previously chosen. For example, if we had 4 hosts allocated at 100% and we added a 5th host, when we edit the vDC we will see the allocation has dropped to about 80% of total resources (the original 4 hosts’ worth out of 5). So all that is needed is to increase it to 100% again. This makes the process of scaling a little easier to understand.

The image below shows the different allocation models available in vCloud. Each offers a different way to allocate resources to Organizations, and each requires different planning when making the decision. You can use a cluster to provide resources to one type of allocation method or mix and match them. But mixing allocation methods that get their resources from a single cluster requires you to pay close attention to how many resources are allocated to each vDC.

Pay as you go model:

With this model you set the amount of CPU and Memory that you will guarantee to each VM. This allows you to overcommit resources if you desire. No resources are committed to an Organization until a vApp is deployed. The image below shows the available resources and what you are going to guarantee, and it also gives an estimate of how many virtual machines could be created with these settings.

Allocation model:

In this model you allocate an amount of CPU and Memory resources to an Organization in vCloud, but you do not have to guarantee all of it. For example, you could allocate 5GHz of CPU but only guarantee 50% of that amount. This allows for overcommitment if desired.

Reservation model:

In the last model you are reserving capacity for an Organization. Much like the allocation method you allocate CPU and Memory, but this model guarantees all of the resources.

Read More