After a few fits and starts, I decided I needed a bona fide server to do everything I’ve been wanting to do with virtualization. On Christmas day, I found a good ‘scratch and dent’ deal on a Dell T105 in their Outlet store. And then . . . I waited.
Until today. The server arrived, and I got right to work (whilst also doing some work with our latest acquisition, QCAP, which is very slick and can be found here: http://www.quest.com/cloudautomation/ ). Initially I thought I was going to use VMware ESX 3.5, but after some consideration, and given that I had 64-bit hardware, I opted for ESXi 4.0. Which meant I had to build another USB boot drive, as I decided to boot from the internal USB adapter and dedicate the two SATA drives to being VM stores.
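For anyone building the same thing: the boot stick is just the ESXi dd image written straight to the drive. A rough sketch from a Linux box; the image file name is from memory and /dev/sdX is a stand-in for the actual device, so double-check both before running dd, since it will clobber whatever it points at:

# Extract the dd image from the ESXi installable ISO, then write it out.
bunzip2 VMware-VMvisor-big-4.0.0-x86_64.dd.bz2
dd if=VMware-VMvisor-big-4.0.0-x86_64.dd of=/dev/sdX bs=1M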
Up until now, I’ve actually been running 3 AD domain controllers at home, 2 of which were running on Macs using Fusion. The third, which ran on a Windows XP laptop, was extremely quirky and fell off the network quite a bit, which appeared to be the result of VMware’s bridged network settings within Workstation and Player.
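For reference, bridged mode is the setting that laptop guest was using; in the .vmx it looks roughly like this (a sketch, not copied from my actual file):

ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"   # guest sits directly on the host's physical network
ethernet0.addressType = "generated"    # VMware picks the MAC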
After about an hour of configuring the hypervisor, setting up the AD integration, and getting comfortable with the environment, I decided to shut down DC02 and bring it over onto the server. DC01 is my main domain controller, and it was going to stay up and running for at least a few more days until I was sure this cutover was feasible, but DC02 and DC03 were fair game. Surprisingly, the 6 GB disk took quite a while to bring over, even on a switched and fairly idle network. After about 45 minutes the copy finished and I powered on the VM, at which point I got the following error:
Module DevicePowerOn power on failed.
Unable to create virtual SCSI device for scsi0:0, '/vmfs/volumes/<long GUID>/dc02/dc02.vmdk'
Failed to open disk scsi0:0: Unsupported or invalid disk type 7. Make sure that the disk has been imported.
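In hindsight, the descriptor file itself shows what ESXi was objecting to. A growable disk created in Fusion or Workstation carries a hosted createType and SPARSE extents, roughly like this (CID and extent sizes are illustrative, not from my actual file):

# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="twoGbMaxExtentSparse"

# Extent description
RW 4192256 SPARSE "dc02-s001.vmdk"
RW 4192256 SPARSE "dc02-s002.vmdk"
# ...one roughly 2 GB slice per extent, hence the dc02-s* files

ESX(i) can only power on VMFS-native disk types, so anything SPARSE has to be converted first.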
With a few quick Google searches, I found the cause: I had created the disk with a 24 GB maximum but only 6 GB in use, so it was a growable (sparse) disk, and ESX can’t power on the hosted sparse format. There were quite a few posts on the topic, but this one was the clearest and gave me exactly what I needed: http://blog.learnadmin.com/2010/09/solution-vmware-vm-import-failed-to.html . Even before I found that article, I knew I had to convert the disk using the “zeroedthick” argument to the “vmkfstools” command, but that seemed like a lot of work, and I didn’t see the setting in the import UI. Thankfully, the article let me know that I could just SSH into the box (yes, I set it up for remote Tech Support) and run the following commands:
cd /vmfs/volumes/<long GUID>/dc02/
vmkfstools -i dc02.vmdk -d zeroedthick dc02-1.vmdk
rm dc02-s*
rm dc02.vmdk
mv dc02-1-flat.vmdk dc02-flat.vmdk
mv dc02-1.vmdk dc02.vmdk
vi dc02.vmdk
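That final vi step is the one that trips people up: renaming the flat file doesn’t touch the descriptor, so dc02.vmdk still references the old -1 name. The extent line inside it looks roughly like this (50331648 sectors is what a 24 GB disk would show; illustrative):

# Extent description
RW 50331648 VMFS "dc02-1-flat.vmdk"   # <- change this to "dc02-flat.vmdk"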
As shown above, I had to edit the dc02.vmdk file because the ‘mv dc02-1-flat.vmdk dc02-flat.vmdk’ rename doesn’t update the descriptor; since I wanted to get rid of the -1 entry in the new file names, the reference inside had to follow. There were a few other quirks in the ‘New VM’ dialog, such as the new VM having to have the same name as the old one, but I got past them and got everything set up. One other thing I learned: don’t change the SCSI controller to the LSI Logic SAS one, but stay with LSI Logic Parallel. I was hoping to use the ‘latest and greatest’ and got into a blue screen reboot loop for my trouble, presumably because the guest had no driver for the new controller. After all this, the VM seems to have come up and is running. It now has a new NIC (ESX didn’t like the MAC address I originally created, so I had to add a new card), and I’m going to wait it out a day or two before I do any more.
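For anyone who’d rather check these by hand, both settings live in the VM’s .vmx file. The lines below are a sketch from memory, not copied from my config:

scsi0.virtualDev = "lsilogic"        # LSI Logic Parallel -- this one boots
# scsi0.virtualDev = "lsisas1068"    # LSI Logic SAS -- blue-screened my DC
ethernet0.addressType = "generated"  # let ESX assign a MAC it considers valid

From what I’ve read, ESX only accepts manually assigned MACs in the 00:50:56 range, which would explain why it rejected the one I’d created back in Fusion.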
Feel free to drop me a line if you have questions or other suggestions. I’ll keep updating this as I make progress on the conversion to ESXi.