Showing posts with label vmware.
Installing VMware ESXi 5.5 from USB
Wednesday, September 25, 2013 at 2:42 PM | Posted by Jared Valentine
Quick note to say that unetbootin can take the "VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso" image file and write it directly to a USB key/memory stick. That memory stick, in turn, is bootable and can be used to install the ESXi hypervisor on a compatible system.
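One small sanity check before writing the stick: verify the download against the checksums VMware publishes on the download page. On a Windows box, something like this should do it (the filename is the build above; I'm assuming certutil is available, which it should be on any recent Windows):
certutil -hashfile VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso MD5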
I used it to install from a USB memory stick to a different USB memory stick, which frees up the local SSD and HDD for use with the VSAN beta.
This wasn't always the case with previous versions of ESXi. Glad to see it's straightforward now. Time to go play!
Posted In usb, vmware | 0 comments
Installing VMware vSphere Client in Windows 8.1 Preview / Windows Blue
Friday, July 5, 2013 at 4:22 PM | Posted by Jared Valentine
I've been playing with the Microsoft Windows 8.1 Preview (aka Windows Blue) and had problems installing VMware's vSphere Client. After clicking "I agree", the install window disappeared, providing no helpful information as to why it failed. I hate the vSphere web client, and would much rather use the thick client to manage my home lab ESXi 5.1 environment.
The cause of the issue is that the .NET Framework v3.5 is not installed in Windows 8 by default, and it is a required component for the vSphere client. I assume that the redistributable is packaged with the vSphere client installer, but it doesn't work in Windows 8.1.
The .NET Framework v3.5 is easy to install; just issue the following command from an elevated/administrator command prompt:
Dism /online /enable-feature /featurename:NetFx3 /All /Source:F:\sources\sxs /LimitAccess
(Replace F:\ with the drive letter of your Windows 8.1 installation media. If your DVD drive is as slow as mine, just copy the ISO to your hard drive, mount it with Daemon Tools, and use that location instead.)
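If you want to confirm the feature actually took, DISM can report its state afterwards; something along these lines should show "Enabled" (run it from the same elevated prompt):
Dism /online /Get-FeatureInfo /FeatureName:NetFx3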
After that, the vSphere client happily installed and I'm off to play with Windows 8.1 now.
Posted In vmware, Windows 8.1 | 1 comment
Configure vCenter to use Wake-On-LAN for DPM
Tuesday, June 4, 2013 at 5:31 PM | Posted by Jared Valentine
I'm labbing-up VMware's DPM (Distributed Power Management) technology this week. This is a self-funded home lab, and I'm too cheap to use enterprise-grade hardware. That means technologies like IPMI/iLO are out of the question. The good news is that both of my lab hosts support Wake-on-LAN (WOL).
I went into the BIOS of each of the ESXi hosts and enabled Wake-On-LAN. I placed both of them into a cluster and enabled DPM. I then attempted to place one of the hosts into standby mode. The vCenter client complained with the following message:
vCenter has determined that it cannot resume host lab1.example.com from standby; therefore, the enter standby task was stopped. Confirm that IPMI/iLO is correctly configured for host lab1.example.com. Or, configure vCenter to use Wake-On-LAN, ensuring there are at least two or more hosts in the cluster, and try the task again.
That last line really bugs me because there are no WOL-specific configuration options in vCenter. After a little digging around (RTFM), I read that vCenter must be able to send the WOL "magic packet" to a physical NIC where vMotion is enabled. I had not yet enabled vMotion on these two lab hosts!
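As a side note, you can confirm from the ESXi shell that the uplink itself advertises magic-packet support. Something like this should work (assuming ethtool is still present on your build; vmnic0 is just a placeholder for whichever physical NIC backs your vMotion port group):
ethtool vmnic0 | grep -i wake-on    # looking for "Supports Wake-on: g" and "Wake-on: g"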
After enabling vMotion on my hosts, I was able to successfully place them into standby mode. Even more important, vCenter could wake them back up again.
I observed one other quirky behavior with DPM. When attempting to manually power-on the sleeping host, vCenter's "please wait" message reads:
Waiting for host to power off before trying to power it on
This comes AFTER I get confirmation that the "Host is in Standby Mode". It could be that vCenter still isn't sure if the machine is fully powered-off just yet, so it's waiting an arbitrary amount of time before sending the WOL packets to my host. I'll have to play with it a little more and see if vCenter gives the same misleading "please wait" message when powering-on a host that has been sleeping for a while.
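If a sleeping host ever refuses to come back, it's worth ruling out WOL itself before blaming DPM. From any Linux box on the same layer-2 segment you can fire a magic packet by hand (assuming the wakeonlan package is installed; the broadcast address and MAC below are placeholders for your vMotion NIC):
wakeonlan -i 192.168.1.255 00:11:22:33:44:55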
Posted In vmware | 0 comments
Active Twinax with Intel 10GbE NICs?
Friday, September 2, 2011 at 11:46 AM | Posted by Jared Valentine
"There are two kinds of people in this world..." I hear this from my father-in-law on a weekly basis. It's usually followed by some quip like "the quick and the hungry".
Twinax cabling (aka direct-attach SFP+) is a cost-effective way to interconnect switches, servers, and storage at 10GbE. Current implementations are limited to runs of roughly 5 to 10 meters. The economics are quite compelling when compared with discrete SFP+ optics. Well, there are 2 kinds of 10GbE twinax cables in this world: active and passive. (My father-in-law is beaming right now.) Unfortunately, not all NICs or switches support both standards.
I was helping a client migrate their VMware ESX 4.x servers to redundant 10GbE interfaces. They had Dell servers with OEM Intel NICs. They had done some initial testing with Dell's 24-port 10GbE switch, but ended up selecting Brocade's VDX6720 Ethernet Fabric network for its redundancy and growth prospects. Once we had everything cabled and powered up, there were connectivity issues: no link-up, and the VMware hosts gave us purple screens on shutdown.
A little poking in the /var/log/vmkernel logfile showed this:
"vmnic8: ixgbe_sfp_config_module_task: failed to load because an unsupported SFP+ module type was detected."
Well, come to find out that Intel's 10 Gigabit AF DA Dual Port Server Adapter isn't compatible with the active twinax cables that came with the Brocade VDX switches, and the VDX switches don't support the passive cables that came with the Intel 82598-based 10GbE NICs.
This customer also had a handful of the newer Intel Ethernet Server Adapter X520 series NICs sporting the newer Intel 82599 ASIC. We still had problems getting things working. A little more digging showed that active twinax support for the X520 series was a fairly recent addition and, according to Intel, requires driver version 15.3 or later. We spun up a copy of vSphere 5.0 hoping that it included a newer version of ixgbe. It did, and we now have 10GbE over active twinax between the Brocade VDX6720s and the Intel X520 NICs.
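If you're not sure which ixgbe version a given host is actually running, the service console will tell you; something like this should do it (vmnic8 is just the interface from my log snippet above):
ethtool -i vmnic8    # reports the driver name (ixgbe), driver version, and firmware version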
Moral of the story: get your active/passive cables straight. Make sure your NIC & switch support the same standard. Make sure your drivers are recent enough (at least for the x520) to support the correct types of cable.
Posted In 10GbE, Brocade, Dell, Ethernet Fabrics, Intel, twinax, VDX, vmware | 0 comments
Error 800700c1 with Veeam FastSCP & 64-bit Windows
Monday, April 25, 2011 at 7:25 AM | Posted by Jared Valentine

Veeam FastSCP is an excellent utility to move files on/off your ESXi host. It's 2x faster than just NFS-mounting the datastore directly from my Windows box.
I was trying to write some files to my datastore recently and got this error message:
"Retrieving the COM class factory for component with CLSID {5F1555F0-0DBB-47F6-B10B-0AB0E1C1D8CE} failed due to the following error: 800700c1"
The short fix is detailed at Everything-Virtual. Hope that helps someone.
Posted In Backup, vmware, Windows 7 | 0 comments
VMware ESXi 4.1 & VT-d w/ Supermicro X8SAX
Sunday, February 20, 2011 at 9:57 PM | Posted by Jared Valentine

I currently have 2 servers in the house... One of them is a NexentaStor storage appliance, and the other is a Windows 7 box (also running VMware Server 2.0 with a handful of VMs). I was interested in consolidating the two of them into a single server since neither box was being heavily utilized. After reading this page, I decided to base the single server off of VMware's ESXi product. The first VM would need to be the storage appliance, and then the rest of the VMs would NFS-mount the storage appliance and boot from there. This setup should handle everything I need it to, and if there were a catastrophic failure, I could pull the drives, put them into another ZFS system, and recover the data.
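For reference, once the storage VM is exporting NFS, attaching that export back to the host as a datastore is a one-liner from the ESXi shell or vCLI. A rough sketch (the hostname, export path, and datastore name are all placeholders for my setup):
esxcfg-nas -a -o nexenta.lan -s /volumes/vmstore nexenta-nfs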
VMware doesn't make it easy to create Raw Device Mappings using SATA disks. That's okay; ESXi has a feature (VMDirectPath I/O) which allows you to pass hardware on the host machine directly through to the guest. This sounds great. I planned on passing a couple of Intel SASMF8I LSI1068-based controllers to the storage VM. I have a Supermicro X8SAX motherboard, an Intel Core i7-930 CPU, and 24GB of RAM (6x4GB DDR3).
In order to enable hardware passthrough, you have to go into the BIOS and enable Intel VT-d. Unfortunately, any time I enabled this feature, ESXi refused to install. If I installed ESXi first and then enabled the feature, it refused to complete booting. I went through all sorts of BIOS settings (ACPI, Intel Virtualization Technology, etc.) and couldn't find a combination that worked. I upgraded the BIOS from 1.1a to 2.0 and that still didn't work. I tried ESXi 4.1 along with 4.1U1.
What finally worked was downgrading the BIOS all the way back to 1.0c! The only downside so far is that it only recognizes 20GB of RAM instead of 24GB. Other than that, the hardware passthrough works perfectly. 1.0c isn't available on Supermicro's website; good luck finding it... I grabbed it from some website in China after some extensive searching on Google.
I've got an e-mail into Supermicro asking why. I'll edit this post if I get an explanation from them.
Posted In vmware | 5 comments
Installing VMware ESXi v4.1 from a USB Stick
Tuesday, January 25, 2011 at 4:52 PM | Posted by Jared Valentine
HERE is how you can install VMware ESXi v4.1 from a USB memory stick. I had previously tried just writing the .ISO image using unetbootin, but that didn't work. This other method works great, though!
Posted In usb, vmware | 0 comments