Running the vSphere 4.0 ESXi Embedded hypervisor on IBM x3690 servers

Posted by Brian Suhr on April 18, 2011 in IBM Servers, VMware, vSphere | 8 comments

I’ve been working with a client lately on a datacenter move, and they selected IBM x3690 servers. The x3690s will be the ESXi hosts for the new site and are running ESXi Embedded. I have not had the opportunity to work with many clients that choose the embedded route, so it was cool to see how IBM set up the servers.

The servers came from the factory with ESXi 4.0 on a USB stick, plugged into one of the two internal USB ports the server offers. When we powered on the servers, some booted straight into VMware and some did not. After digging into the boot order in the BIOS, I noticed that the Embedded Hypervisor option had not been added to the boot order on a couple of the servers. A quick add and they were running just like the rest; I guess someone at the factory missed that step.

The servers took a very long time to POST and boot, partly due to the 128 GB of RAM installed. We turned off some of the non-essentials and modified the boot order to go straight to ESXi, which cut the POST time down some. You can see from the image below that it’s just another x-series server.

I snapped the image below with the cover off, showing all the sticks of memory installed.

The last image below is a close-up of the two USB ports that are internal to the server. The lower one has the USB stick from the factory with ESXi Embedded on it.

About Brian Suhr

Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix, and the owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and cloud project designs. Awarded VMware vExpert status six years running, 2011 - 2016. VCP3, VCP5, VCP5-IaaS, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design.

8 Comments

  1. Looks like you went with single-processor units. I don’t see the memory tray, and you have all 128GB populated in one deck. The dual-proc boxes will have a memory tray that you split the memory across: 96GB for one proc, 96GB for the other in support of NUMA. I like having the 16 cores in a 2U chassis personally. I too suffer from very long boot times; I’m not sure if it’s because of the UEFI or not, but it takes a good 10 minutes to get ESXi up and running from a cold boot.

    I have 3 of these running right now. I’ll be curious to see what happens when you try to update to 4.1 or 4.1U1, and whether you run into the same issues that a lot of ESXi Embedded users are having, where key directories are blown out during the update, primarily the locker directory as well as the location of VMware Tools for the hosts. The ability to pull ESXi logs from the host will also fail (a quick sanity check is sketched at the end of this comment).

    I take it those are the QLogic CNAs. We have had issues with hosts under load (25-35 VMs) where the cards simply fail without error: all connectivity to the LAN outside the ESXi host drops, yet the link shows green at both the NIC and the switch, and no errors are logged. It’s a very strange issue. I’ve had a ticket open with IBM that is with Level 3 engineering and still unresolved.

    One other thing: you really can’t get LAN redundancy in these boxes if you go with the CNAs for LAN. ESXi will only support four 10Gb NICs (thus two of the QLogic CNAs) or a mix of two 10Gb NICs and eight 1Gb NICs. So all of your 10Gb is locked into a single card slot, with the single point of failure being the ASIC on the card.

    The other two-port CNA option on that model is the Emulex VFA, which I still believe does not support ESXi 4.1. We had PSODs that cleared with updated VMware drivers, but those drivers were not yet supported by IBM.

    Powerful machines, but they have been a bit of a headache for me these last few months.
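
    For anyone planning that jump to 4.1, here is a minimal post-update sanity check from the host's Tech Support Mode shell. It assumes the default locker layout, so adjust the paths for your build:

    ~ # ls -l /locker/packages/    # the VMware Tools bundles for guests live under here by default
    ~ # vm-support                 # generating a log bundle is a quick test; it tends to fail when the locker is damaged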

  2. Hi Brian,
    What’s the size of the USB stick? 8 or 16GB?
    FYI, if it’s less than 5GB, ESXi will use 4GB of your physical memory for the scratch partition by default…
    The partition can be created on a remote VMFS or NFS volume, though (see the sketch below).
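
    One way to point it there from the vCLI is the ScratchConfig.ConfiguredScratchLocation advanced setting. This is just a sketch: the host and datastore names are placeholders, the target folder must already exist on the datastore, and the host needs a reboot for the change to take effect.

    # point scratch at a folder on shared storage (create the folder first)
    vicfg-advcfg --server esx01.example.com --username root \
      -s /vmfs/volumes/datastore1/.locker-esx01 ScratchConfig.ConfiguredScratchLocation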

  3. Don’t know about Brian’s, but my hosts came with 2GB sticks.
    I was not aware that you needed more than 2GB; KB 1026500 says the minimum is 1GB (though I wouldn’t recommend that). I set up the vMA for logging outside the ESXi hosts, but you can also set a specific datastore on your SAN (a syslog example follows this thread).

    ~ # df -h
    Filesystem   Size    Used     Available  Use%  Mounted on
    visorfs      1.6G    368.8M   1.2G       23%   /

    • I don’t remember what size the sticks are in those servers. I will have to look next time I’m working with them.

      Thanks for the great input on the article, guys.
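
    For anyone copying the vMA logging setup mentioned above, a minimal sketch with the vCLI's vicfg-syslog, using placeholder hostnames:

    # forward the host's syslog output to the vMA appliance
    vicfg-syslog --server esx01.example.com --username root --setserver vma01.example.com --setport 514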

  4. FYI: if you are running the QLogic CNA 10Gb adapters on ESXi 4.1, they are not supported per IBM. The card’s Redbook says they are supported on ESXi 4.1, but the combination of the 10Gb CNA and the 7148 (x3690 X5) is not officially supported by IBM with ESXi 4.1.

    • That’s good to know. Those cards are not CNAs, though; they are just HBAs.

    • Hi, your info is not right… for all of you who have installed a CNA in ESXi: you have to manually install the network drivers using vihostupdate (a sketch follows). There are CNA network drivers available (version 1.0.0.43) from VMware and QLogic (search for QLE8142)!
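
      A rough sketch of that install from the vCLI or vMA, with the host already in maintenance mode; the bundle filename here is hypothetical, so use the one from the actual driver download:

      # install the offline driver bundle, then list installed bulletins to confirm it landed
      vihostupdate --server esx01.example.com --username root --install --bundle qlogic-cna-nic-1.0.0.43-offline_bundle.zip
      vihostupdate --server esx01.example.com --username root --query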

