
Upgraded a Linux box and changed out the root drives. There was a RAID-5 array of three SATA drives (not root) that I moved over. I reinstalled the OS, but it was CentOS 6.4 both before and after.

# mdadm --assemble /dev/md1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: cannot open device /dev/sdc1: No such file or directory
mdadm: /dev/sdc1 has no superblock - assembly aborted

And true enough, there's no /dev/sdc1.
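For what it's worth, a quick way to see whether it's just the /dev node that's missing (rather than the partition itself) would be something like:

# ls -l /dev/sdc*
# grep sdc /proc/partitions

/proc/partitions shows what the kernel parsed from the disk's partition table, independent of whether udev created a node under /dev.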

The partition does exist:

# fdisk -l /dev/sdc

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cca42

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1      121601   976760001   83  Linux

The drives show up in the BIOS, and obviously I can fdisk them, so they're working. But why wouldn't Linux create device nodes for them?
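The obvious thing to try would be forcing a re-read of the partition table, something like:

# partprobe /dev/sdc
# partx -a /dev/sdc

If /dev/sdc1 still doesn't appear after that, then presumably something above the kernel (udev rules or device-mapper) is holding on to the disk.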

I saw this:

Partition is missing in /dev

However, it doesn't quite apply. In my case, the mobos before and after both had Intel Matrix RAID, but I've never used Intel's RAID - I've always used mdadm and done the RAID in the kernel.

And when I do the examination:

# mdadm -Evvv /dev/sdc
/dev/sdc:
   MBR Magic : aa55
Partition[0] :   1953520002 sectors at           63 (type 83)

There's nothing in /dev/mapper that I could see. The OS is current as of the 6.4 dist (haven't done a yum update yet).
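(In case it's useful: these are roughly the commands I'd use to check whether device-mapper or dmraid has grabbed the disks - dmsetup lists any active maps, and dmraid -r reports any RAID signatures it finds on the raw drives:)

# ls /dev/mapper
# dmsetup ls
# dmraid -r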

So...how can I get /dev/sdc1 to show up to the OS?

The same problem exists for /dev/sdd1 and /dev/sde1, which are the other two drives from the old array. /dev/sdf1 through /dev/sdi1 (which are new) all work fine.
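(A quick one-liner to compare all of them, just testing whether each partition's block device node exists:)

# for d in c d e f g h i; do [ -b /dev/sd${d}1 ] && echo "sd${d}1 present" || echo "sd${d}1 missing"; done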

2 Answers


You must have used the drives in the Intel fakeraid at some point in the past and then simply disabled the RAID BIOS. That left the fakeraid signatures on the drives; dmraid recognizes them and hides the partitions, since you aren't supposed to touch them except through the dmraid device. Use dmraid -E to erase the fakeraid signatures from the drives.
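Something along these lines (double-check the exact flags against your dmraid version's man page before running it, since it writes to the disk):

# dmraid -r                # list disks carrying fakeraid signatures
# dmraid -r -E /dev/sdc    # erase the fakeraid metadata from one disk

Repeat for each affected disk, and make sure you're pointing it at the right ones.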

  • Right, they were used in fakeraid but I wasn't doing "hardware" raid - just mdadm. In other words, the previous mobo presented them as normal drives, not RAID. I didn't know it would leave sticky fingerprints on them...thanks.
    – raindog308
    Commented Sep 1, 2013 at 15:03

yum update didn't fix it.

The solution was to add 'nodmraid' to the kernel's boot line (in /boot/grub/grub.conf on CentOS 6):

title CentOS (2.6.32-358.14.1.el6.x86_64)
        root (hd1,0)
        kernel /boot/vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=bcc55ef9-43b4-4938-a1a6-9ccd1f9be1f8 rd_NO_LUKS  KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 nodmraid rd_MD_UUID=e5431160:92d33565:164c859f:ee1f94e6 SYSFONT=latarcyrheb-sun16 quiet rd_NO_LVM rd_NO_DM crashkernel=auto
        initrd /boot/initramfs-2.6.32-358.14.1.el6.x86_64.img

I'd forgotten that I had that set on the previous box. I'm not entirely certain why device mapper did what it did but...this stopped it :-)
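(After rebooting, this is roughly how I'd confirm the flag took effect and the nodes are back:)

# grep -o nodmraid /proc/cmdline
# ls -l /dev/sdc1 /dev/sdd1 /dev/sde1
# mdadm --assemble /dev/md1 /dev/sdc1 /dev/sdd1 /dev/sde1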
