
A client has a number of old Linux systems, between five and fifteen years old, still running on their original hardware. Due to concerns about the age of the disks and other components, they would like to have them all virtualised onto VMware vHosts.

I have been able to virtualise Windows machines of almost any 21st-century age via VMware's Standalone Converter, so far with a 100% success rate. However, attempting to do the same with the Linux machines invariably results in failure and a VM that won't boot, usually giving an error along the lines of "kernel panic, can't start /init".

I've found that if I mount the ISO of a rescue CD, boot off that, and then choose "boot Linux from the hard disk", the systems will boot and I can log in. However, that leaves them running the rescue CD's kernel instead of the on-board one, which then leads to failures when, for instance, trying to re-add the dummy interfaces on a radius server - running "modprobe dummy" bombs out with:

FATAL: Could not load /lib/modules/3.14.50-std460-amd64/modules.dep: No such file or directory

Examining the /lib/modules directory, the only modules.dep present is:

/lib/modules/2.6.27.7-9-pae/modules.dep

Which matches what uname -r returns on the original physical machine:

uname -r
2.6.27.7-9-pae

On the P2V VM booted from the rescue CD it gives

uname -r
3.14.50-std460-amd64
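
That mismatch is exactly why modprobe fails: it resolves modules relative to the running kernel's version string. A quick way to see it from the rescue-booted VM (paths as reported above):

ls /lib/modules/$(uname -r)/modules.dep      # rescue kernel 3.14.50-std460-amd64 - no such directory
ls /lib/modules/2.6.27.7-9-pae/modules.dep   # the installed kernel's index is the only one present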

On the physical machine init is in /sbin/init and the root file system is /dev/sda2:

rad02:~ # which init
/sbin/init
rad02:~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              20G  3.2G   16G  17% /
udev                  243M  7.6M  236M   4% /dev
/dev/sda3              52G   15G   35G  30% /home
/dev/sdb1              74G  7.1G   63G  11% /var/log

On the VM I've tried booting off the hard disk and adding root=/dev/sda2 init=/sbin/init at the boot loader, and I've seen the machine appear to attempt to start both /init and /sbin/init - but it still fails with a "kernel panic, cannot start init" error.

This particular original machine is running openSUSE 11.1 (i586), but I'm hoping for a general answer as there are a variety of RedHat, SuSE, and openSUSE systems I'd like to virtualise for this client.
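
For reference, openSUSE 11.1 uses GRUB legacy, so the entry being edited lives in /boot/grub/menu.lst and, with those parameters appended, would look roughly like this (kernel and initrd file names taken from the uname -r output above, the rest illustrative):

title openSUSE 11.1 (P2V)
    root (hd0,1)
    kernel /boot/vmlinuz-2.6.27.7-9-pae root=/dev/sda2 init=/sbin/init showopts
    initrd /boot/initrd-2.6.27.7-9-pae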

What do I need to do to get the P2V'd VMs to start init and boot successfully?

Edit: OK, thanks to those who commented I now have a better understanding of the problem - GRUB can see the disks fine, but the kernel itself cannot, and this is most likely down to a missing controller driver in the INITRD_MODULES line of /etc/sysconfig/kernel.

Here's what's on the original physical host:

INITRD_MODULES="processor thermal ata_piix ata_generic piix ide_pci_generic fan jbd ext3 edd"

Here's what the P2V conversion had put in the VM's file:

INITRD_MODULES="mptscsih mptspi mptsas scsi_transport_spi scsi_transport_sas BusLogic ahci pcnet32 processor thermal ata_piix ata_generic piix ide_pci_generic fan jbd ext3"

And after downloading an openSUSE 11.1 install DVD and running the "repair installed system" option, that had been changed to this:

INITRD_MODULES="ahci mptsas ata_piix ata_generic piix ide_pci_generic jbd ext3"

While booted from the earlier rescue CD I ran locate on all the modules listed, and everything bar the ide_pci_generic driver was present - given that VMware gives SATA LSI Logic as the standard disk type, I assume it won't be using an IDE driver?
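
Rather than relying on locate (which only searches the root filesystem), the initrd image itself can be listed to see which drivers actually made it into the early-boot environment - a quick check, assuming the usual gzip-compressed cpio format:

zcat /boot/initrd-2.6.27.7-9-pae | cpio -t | grep '\.ko$'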

I have another P2V'd VM running openSUSE 10 which initially had the same problem of refusing to boot but then, after being left powered off for several months, surprisingly booted correctly (and has since been rebooted several times, always successfully). Looking in /etc/sysconfig/kernel on that one I see:

INITRD_MODULES="mptscsih mptspi scsi_transport_spi BusLogic pcnet32 piix ata_piix processor thermal fan jbd ext3"

So where do I go from here?

Edit 2:

The ticked answer from A.B below resolved this problem.

Following the directions given, and using a spare VM which had been installed fresh with the same Linux version as the machine we were trying to P2V, I created three directories under /tmp: physical, virtual, and combo. I pulled the initrd and System.map files from the physical machine and unpacked them under /tmp/physical, then pulled the same files from the VM I was on (i.e. a working VM of the same OS) and unpacked those in /tmp/virtual.
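
Roughly, that unpacking step looked like the answer's commands run once per directory (the initrd-physical and initrd-virtual names below are just placeholders I used for the copied-in images, since both machines use the same 2.6.27.7-9-pae file names):

mkdir -p /tmp/physical /tmp/virtual /tmp/combo
cd /tmp/physical
gunzip < /tmp/initrd-physical | cpio -i -m    # image copied from the physical host
cd /tmp/virtual
gunzip < /tmp/initrd-virtual | cpio -i -m     # image copied from the fresh reference VM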

Out of curiosity I did a diff on the output of du for each directory, thus:

cd /tmp/physical
du > /tmp/ph.txt
cd /tmp/virtual
du > /tmp/vi.txt
cd /tmp
cat ph.txt |cut -d'.' -f2,3,4,5,6 |sort > ph-sorted.txt 
cat vi.txt |cut -d'.' -f2,3,4,5,6 |sort > vi-sorted.txt
diff ph-sorted.txt vi-sorted.txt

Which produced this output - very little difference, just a few directories:

21,22d20
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/hid
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/hid/usbhid
26c24,25
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/input
---
> /lib/modules/2.6.27.7-9-pae/kernel/drivers/message
> /lib/modules/2.6.27.7-9-pae/kernel/drivers/message/fusion
29,31d27
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/usb
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/usb/core
< /lib/modules/2.6.27.7-9-pae/kernel/drivers/usb/host
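
For reference, a simpler way to get much the same directory-level comparison, without the du/cut round-trip, would be something like:

diff <(cd /tmp/physical && find . -type d | sort) <(cd /tmp/virtual && find . -type d | sort)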

I then copied the complete contents of both /tmp/physical and /tmp/virtual into /tmp/combo (with the virtual one coming second so it would overwrite any conflicting files).
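
That copy amounts to something like this (order matters - the virtual tree goes second so its files win any clashes):

cp -a /tmp/physical/. /tmp/combo/
cp -a /tmp/virtual/. /tmp/combo/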

I then ran (as instructed in the answer below):

depmod -b /tmp/combo -F /tmp/combo/System.map-2.6.27.7-9-pae -v 2.6.27.7-9-pae

which threw out screenfuls of dependency errors but otherwise ran OK, followed by

cd /tmp/combo
find . -print0 | cpio -o -0 -H newc |gzip -9 > /tmp/initrd-2.6.27.7-9-pae

I booted the failed P2V off the rescue CD and put it on the network, copied initrd-2.6.27.7-9-pae to /boot on it, detached the rescue CD ISO, and rebooted - and it worked! openSUSE 11.1 is running happily on the P2V'd VM and services appear to work normally - success!
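
As a final sanity check that the VM really is back on its own kernel, and that the original modprobe symptom has gone:

uname -r           # should now report 2.6.27.7-9-pae
modprobe dummy     # should load cleanly rather than failing on modules.dep
lsmod | grep dummy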

  • What about your disk controller hardware? If your physical machine's initramfs (or, for very old systems, a real ramdisk image) includes only that specific controller's driver, without a generic SCSI controller supported by VMware, and VMware doesn't modify the boot to cope with that, you end up with no disks detected. I hit this problem on an HP server with some RAID controller (cciss, was it?) where the SUSE system had installed only the minimum possible set of kernel drivers in the boot image. I had to "open" the initramfs with cpio, copy the missing drivers from the system, and put the initramfs back into the VM.
    – A.B
    Commented Oct 12, 2016 at 17:18
  • You could try boot-repair. Download the 32-bit ISO and boot it as a virtual CD in the VM. You could also experiment with chroot from the rescue CD. Depending on what you find, it may be possible to patch the VM to boot without the rescue CD.
    – AFH
    Commented Oct 12, 2016 at 17:22
  • Also (and possibly first), make sure you are using only a single CPU, and try changing the virtualisation engine to one of the explicit values.
    – AFH
    Commented Oct 12, 2016 at 17:37
  • Ta for that. I've tried the boot-repair disk, but it seems to fail because it can't find a repository for openSUSE 11.1's version of GRUB - the command it asks me to copy and paste in, involving zypper, fails. @A.B I think the drivers are OK, as when trying to boot normally the openSUSE start screen with the choice between standard and failsafe comes up fine - it's only afterwards, where it should start init, that everything derails.
    – Pyromancer
    Commented Oct 12, 2016 at 19:17
  • The boot menu is using GRUB, which can read the disk. So it can also read the initramfs (still called initrd for historical reasons) and put it in RAM alongside the kernel. The kernel then runs the fake init from that RAM filesystem. This fake init runs a few scripts (cryptfs, LVM and the like) but fails to find any disk, so those scripts can never execute the actual /sbin/init: everything derails.
    – A.B
    Commented Oct 12, 2016 at 19:25

1 Answer


I happened to have a similar problem (a missing driver, though in your case it's perhaps caused more by a change from IDE/ATA to SCSI), so from memory:

  • retrieve the /boot/initrd.img-2.6.27.7-9-pae file (from the physical server or with the rescue disk from the VM) as well as /boot/System.map-2.6.27.7-9-pae
  • put them in /tmp on any Linux computer that has the gzip, cpio and depmod commands (and fakeroot, or root access)
  • become root, or use fakeroot to simulate it (that's needed for cpio later)

    $ fakeroot
    # mkdir /tmp/cpio
    # cd /tmp/cpio
    # gunzip < /tmp/initrd.img-2.6.27.7-9-pae | cpio -i -m
    
  • The tricky part: figure out which driver may be missing in /tmp/cpio/lib/modules/2.6.27.7-9-pae/. You seem to have a candidate list. One problem to foresee (and to try to correct): it seems your physical server is using ATA/IDE, not SCSI. If you go from ATA to SCSI your drives will change from /dev/hda, /dev/hdb ... to /dev/sda, /dev/sdb ... and you'll get booting problems again (disks still not found). I think that's what you got.

    • either change the emulated hardware to match the previous hardware: use IDE/ATA, not SCSI (BusLogic). I'd do that. Perhaps then the SUSE 11 rescue DVD is enough.
    • or be prepared to edit (in the VM booted with your rescue CD, but perhaps also in the initrd somewhere else!) /etc/fstab to cope with all this by changing /dev/hdX into /dev/sdX (a sketch of that edit follows below). Because your installation is old, I wouldn't count on modern UUID= settings to solve this. I'd avoid this solution unless I knew every place to edit besides fstab.

    If some driver is missing, it's either really missing or it's built in to the kernel (see depmod later). Copy the modules you need to add into /tmp/cpio/lib/modules/2.6.27.7-9-pae/ from the physical server, from the VM booted with the rescue CD, or even from some SUSE repository (if you're sure it's the very same version). Note that some modules have dependencies; if in doubt, include more than needed (as long as there is room in the VM's /boot ...)
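
    As a sketch of the fstab edit mentioned in the second option above - run in the VM booted from the rescue CD with its root partition mounted on /mnt; the mount point and devices are examples only, adapt them to what the rescue CD actually shows:

    # cp /mnt/etc/fstab /mnt/etc/fstab.orig
    # sed -i 's|^/dev/hd\([a-z]\)|/dev/sd\1|' /mnt/etc/fstab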

  • then rebuild the module dependency list

    # depmod -b /tmp/cpio -F /tmp/System.map-2.6.27.7-9-pae -v 2.6.27.7-9-pae
    

    you can check /tmp/cpio/lib/modules/2.6.27.7-9-pae/modules.builtin to see whether a missing module is actually built directly into the kernel

  • repack the tree (and overwrite the previous initrd file)

    # cd /tmp/cpio
    # find . -print0 | cpio -o -0 -H newc |gzip -9 > /tmp/initrd.img-2.6.27.7-9-pae
    

Put this file back into /boot on the VM. The VM should now boot correctly.
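
A minimal sketch of that last step, assuming the VM is booted from the rescue CD with its root filesystem mounted on /mnt and the rebuilt image sits on another machine reachable over the network (host name and paths are only examples):

    # cp /mnt/boot/initrd.img-2.6.27.7-9-pae /mnt/boot/initrd.img-2.6.27.7-9-pae.orig
    # scp user@buildhost:/tmp/initrd.img-2.6.27.7-9-pae /mnt/boot/

Make sure the file name matches what the boot loader entry references (on the openSUSE systems in the question it is initrd-2.6.27.7-9-pae, without the .img).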

  • Apologies for delayed response, was away for a few days. Thanks for the comprehensive answer, I will attempt to implement this later today and report back on results.
    – Pyromancer
    Commented Oct 18, 2016 at 13:26
  • The physical server is SCSI - it's a Dell PowerEdge SC1425 and the drive partitions are identified as /dev/sdaX. To implement the instructions above I took one of the failed P2V partitions, mounted the openSUSE 11.1 boot disk, and did a fresh install using the physical machine's disk layout and the processor settings from the failed P2V. This installed correctly and boots quite happily, so this version of openSUSE is capable of running OK in a VM on this vHost. I've done the above in two directories, one for the files from the physical machine and one for the files from the virtual, and will now attempt to combine them.
    – Pyromancer
    Commented Oct 18, 2016 at 19:17
