I upgraded a Linux box and swapped out the root drives. There was a RAID-5 array of three SATA drives (not the root filesystem) that I moved over. I reinstalled the OS, but it was CentOS 6.4 both before and after.
# mdadm --assemble /dev/md1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: cannot open device /dev/sdc1: No such file or directory
mdadm: /dev/sdc1 has no superblock - assembly aborted
And sure enough, there is no /dev/sdc1.
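For reference, here is roughly how I checked whether the kernel itself knows about the partition, as opposed to just the /dev node being absent (a sketch; device names are from my box):

```shell
# Does the kernel know about sdc1, or is only the /dev node missing?
if [ -e /proc/partitions ]; then
    grep sdc /proc/partitions || echo "kernel has no sdc partitions"
fi
# sysfs shows the kernel's parsed partition table, independent of /dev
if [ -d /sys/block/sdc ]; then
    ls /sys/block/sdc/
fi
```

If sdc1 shows up in /proc/partitions or /sys/block/sdc/ but not in /dev, that would point at udev rather than the kernel.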
The partition does exist:
# fdisk -l /dev/sdc
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cca42
Device Boot Start End Blocks Id System
/dev/sdc1 * 1 121601 976760001 83 Linux
The drives show up in the BIOS, and I can obviously fdisk them, so the drives themselves are working. So why won't Linux create device nodes for their partitions?
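A sketch of what I'm considering trying next: forcing the kernel to re-read the partition tables. (partprobe comes from parted and partx from util-linux; the device names are the ones from above.)

```shell
#!/bin/sh
# Force a partition-table re-read on each old array member.
# Device names are assumed from this question; adjust as needed.
found=0
for disk in /dev/sdc /dev/sdd /dev/sde; do
    if [ -b "$disk" ]; then
        found=1
        # Try partprobe first; fall back to partx if it fails
        partprobe "$disk" 2>/dev/null || partx -a "$disk"
    fi
done
if [ "$found" -eq 0 ]; then
    echo "no matching block devices found"
fi
```

If the kernel re-reads the tables and the nodes still don't appear, the problem presumably lies elsewhere (e.g. something else has claimed the whole disks).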
I saw this:
However, it doesn't quite apply. In my case, both the old and the new motherboard support Intel Matrix RAID, but I've never used Intel's RAID - I've always used mdadm and done the RAID in the kernel.
And when I do the examination:
# mdadm -Evvv /dev/sdc
/dev/sdc:
MBR Magic : aa55
Partition[0] : 1953520002 sectors at 63 (type 83)
There's nothing in /dev/mapper that I can see. The OS is current as of the 6.4 distribution (I haven't run a yum update yet).
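For completeness, here are the other checks I can think of - whether something else has already claimed the whole disks (an auto-assembled md array, or a device-mapper/dmraid set). This is just a sketch, assuming the tools present on a stock CentOS 6.4 install:

```shell
# Any md arrays the kernel assembled on its own?
if [ -e /proc/mdstat ]; then
    cat /proc/mdstat
fi
# Any device-mapper targets? (dmraid sets would show up here)
if command -v dmsetup >/dev/null 2>&1; then
    dmsetup ls
fi
# sysfs "holders" shows whether another subsystem owns a disk
for d in sdc sdd sde; do
    if [ -d "/sys/block/$d/holders" ]; then
        echo "$d holders: $(ls /sys/block/$d/holders)"
    fi
done
```

None of these turned up anything obvious for me, but maybe the output would mean more to someone who knows where to look.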
So...how can I get /dev/sdc1 to show up to the OS?
The same problem exists for /dev/sdd1 and /dev/sde1, the two other drives from the old array. /dev/sdf1 through /dev/sdi1 (which are new) all work fine.