A client has a number of old Linux systems, dating from five to fifteen years ago and still running on their original hardware. Due to concerns about the age of the disks, they would like them all virtualised onto VMware vHosts.
I have been able to virtualise Windows machines of almost any 21st-century vintage via VMware's Standalone Converter, so far with a 100% success rate. However, attempting the same with the Linux machines invariably results in failure: a VM that won't boot, usually giving an error along the lines of "kernel panic, can't start /init".
I've found that if I mount the ISO of a rescue CD, boot off that, and then choose "boot Linux from the hard disk", the systems will boot and I can log in. However, that leaves them running the rescue CD's kernel instead of the on-board one, which then leads to failures when, for instance, attempting to re-add the dummy interfaces on a RADIUS server - running "modprobe dummy" bombs out with:
FATAL: Could not load /lib/modules/3.14.50-std460-amd64/modules.dep: No such file or directory
On examining the /lib/modules directory, the only modules.dep present is:
/lib/modules/2.6.27.7-9-pae/modules.dep
Which matches what uname -r returns on the original physical machine:
uname -r
2.6.27.7-9-pae
On the P2V VM booted from the rescue CD it gives
uname -r
3.14.50-std460-amd64
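That version mismatch is the whole story behind the modprobe failure: modprobe resolves modules under /lib/modules/$(uname -r), so a rescue kernel running over an on-disk module tree built for a different version has nothing to load. A quick, generic way to see the mismatch (nothing here is specific to my machines):

```shell
# modprobe looks for modules under /lib/modules/$(uname -r);
# if the running kernel doesn't match the installed module tree,
# that directory simply doesn't exist.
uname -r                      # version of the running kernel
ls /lib/modules/ 2>/dev/null || true   # module trees actually on disk
ls "/lib/modules/$(uname -r)/modules.dep" 2>/dev/null \
  || echo "no module tree matches the running kernel"
```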
On the physical machine init is in /sbin/init and the root file system is /dev/sda2:
rad02:~ # which init
/sbin/init
rad02:~ # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 20G 3.2G 16G 17% /
udev 243M 7.6M 236M 4% /dev
/dev/sda3 52G 15G 35G 30% /home
/dev/sdb1 74G 7.1G 63G 11% /var/log
On the VM I've tried booting off the hard disk and adding root=/dev/sda2 init=/sbin/init at the boot loader prompt; I've seen the machine appear to attempt to start both /init and /sbin/init, but it still fails with a "kernel panic, cannot start init" error.
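For context, openSUSE 11.1 uses GRUB legacy, so the parameters go on the kernel line of the entry in /boot/grub/menu.lst - roughly like this (the title and partition shown are illustrative):

```
title openSUSE 11.1
    root (hd0,1)
    kernel /boot/vmlinuz-2.6.27.7-9-pae root=/dev/sda2 init=/sbin/init
    initrd /boot/initrd-2.6.27.7-9-pae
```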
This particular original machine is running openSUSE 11.1 (i586), but I'm hoping for a general answer, as there are a variety of RedHat, SuSE, and openSUSE systems I'd like to virtualise for this client.
What do I need to do to get the P2V'd VMs to start init and boot successfully?
Edit: OK, thanks to those who commented I now better understand the problem - GRUB can see the disks fine, but the kernel itself cannot, and this is most likely down to a missing controller driver in the INITRD_MODULES line of /etc/sysconfig/kernel.
Here's what's on the original physical host:
INITRD_MODULES="processor thermal ata_piix ata_generic piix ide_pci_generic fan jbd ext3 edd"
Here's what the P2V conversion had put in the VM's file:
INITRD_MODULES="mptscsih mptspi mptsas scsi_transport_spi scsi_transport_sas BusLogic ahci pcnet32 processor thermal ata_piix ata_generic piix ide_pci_generic fan jbd ext3"
And after downloading an openSUSE 11.1 install DVD and running the "repair installed system" option, that had been changed to this:
INITRD_MODULES="ahci mptsas ata_piix ata_generic piix ide_pci_generic jbd ext3"
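For anyone hitting the same wall: on SUSE-family systems the usual way to act on a corrected INITRD_MODULES line is to boot the rescue CD, chroot into the installed system, and rebuild the initrd with mkinitrd. A rough sketch only - the device name and kernel version here are from my system, so adjust to suit:

```shell
# Sketch, run from the rescue CD; /dev/sda2 is my root partition.
mount /dev/sda2 /mnt
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
# ...edit INITRD_MODULES in /etc/sysconfig/kernel, then rebuild:
mkinitrd -k /boot/vmlinuz-2.6.27.7-9-pae -i /boot/initrd-2.6.27.7-9-pae
```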
While booted from the earlier rescue CD I ran locate on all the modules listed, and everything bar the ide_pci_generic driver was present - given that VMware gives SATA LSI Logic as the standard disk type, I assume it won't be using an IDE driver?
I have another P2V'd VM running openSUSE 10 which initially had the same problem of refusing to boot, but then, after being left powered off for several months, surprisingly booted correctly (and has since been rebooted several times, always successfully). Looking in /etc/sysconfig/kernel on that one I get:
INITRD_MODULES="mptscsih mptspi scsi_transport_spi BusLogic pcnet32 piix ata_piix processor thermal fan jbd ext3"
So where do I go from here?
Edit 2:
The ticked answer from A.B below resolved this problem.
Following the directions given, and using a spare VM which had been installed fresh with the same Linux version as the machine we were trying to p2v, I created three directories under /tmp: physical, virtual, and combo. I pulled the initrd and System.map files from the physical machine and unpacked them under /tmp/physical, then pulled the same files from the VM I was on (i.e. a working VM of the same OS) and unpacked those under /tmp/virtual.
Out of curiosity I diffed the du output for each directory:
cd /tmp/physical
du > /tmp/ph.txt
cd /tmp/virtual
du > /tmp/vi.txt
cd /tmp
cat ph.txt |cut -d'.' -f2,3,4,5,6 |sort > ph-sorted.txt
cat vi.txt |cut -d'.' -f2,3,4,5,6 |sort > vi-sorted.txt
diff ph-sorted.txt vi-sorted.txt
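As an aside, the cut -d'.' trick only survives paths with a limited number of dots; since du separates the size from the path with a tab, cutting on the tab gives the same sorted path list more robustly:

```shell
# du output is "<size><TAB><path>"; cutting on the tab keeps the
# whole path no matter how many dots it contains.
du | cut -f2- | sort > sorted.txt
```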
Which produced this output - very little difference, just a few directories:
21,22d20
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/hid
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/hid/usbhid
26c24,25
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/input
---
> /lib/modules/2.6.27.7-9-pae/kernel/drivers/message
> /lib/modules/2.6.27.7-9-pae/kernel/drivers/message/fusion
29,31d27
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/usb
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/usb/core
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/usb/host
I then copied the complete contents of both /tmp/physical and /tmp/virtual into /tmp/combo (with the virtual one copied second, so it would overwrite any conflicts).
I then ran (as instructed in the answer below):
depmod -b /tmp/combo -F /tmp/combo/System.map-2.6.27.7-9-pae -v 2.6.27.7-9-pae
which threw out screenfuls of dependency errors but otherwise ran OK, followed by
cd /tmp/combo
find . -print0 | cpio -o -0 -H newc |gzip -9 > /tmp/initrd-2.6.27.7-9-pae
I booted the failed p2v off the rescue CD, put it on the network, and copied initrd-2.6.27.7-9-pae to /boot on it; I then detached the rescue CD ISO and rebooted - and it worked! openSUSE 11.1 running happily on the p2v VM, and services appear to work normally - success!