
First, I am relatively new to Linux (but not to *nix). I have four disks assembled into the following Intel AHCI BIOS fake-RAID arrays:

  • 2x320GB RAID1 - used for the operating system (md126)
  • 2x1TB RAID1 - used for data (md125)

I installed my operating system on the 320GB array; the second array I did not even select during the Fedora 14 installation. After partitioning and installing Fedora successfully, I tried to make the second array available. It was possible to make it visible in Linux with mdadm --assemble --scan, after which I created one maximum-size partition and one maximum-size ext4 filesystem in it, mounted it, and used it. After a restart there were a few I/O errors during boot regarding md125, the filesystem on it could not be mounted, and I was dropped into a repair shell. I commented out the filesystem in fstab and the system booted. To my surprise, the array was marked as "auto read only":
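For reference, the sequence I used was roughly the following (the mount point and exact fdisk answers are from memory, not exact):

```shell
# Assemble every array whose metadata is found during the scan
mdadm --assemble --scan

# Create one maximum-size partition on the now-visible array
# (inside fdisk: n = new partition, accept the defaults, w = write)
fdisk /dev/md125

# Create the filesystem and mount it (mount point is illustrative)
mkfs.ext4 /dev/md125p1
mount /dev/md125p1 /mnt/data
```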

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md125 : active (auto-read-only) raid1 sdc[1] sdd[0]
      976759808 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sdc[1](S) sdd[0](S)
      4514 blocks super external:imsm

md126 : active raid1 sda[1] sdb[0]
      312566784 blocks super external:/md1/0 [2/2] [UU]

md1 : inactive sdb[1](S) sda[0](S)
      4514 blocks super external:imsm

unused devices: <none>
[root@localhost ~]#

And the partition on it was not available as a device special file in /dev:

[root@localhost ~]# ls -l /dev/md125*
brw-rw---- 1 root disk 9, 125 Jan  6 15:50 /dev/md125
[root@localhost ~]#

But the partition is there according to fdisk:

[root@localhost ~]# fdisk -l /dev/md125

Disk /dev/md125: 1000.2 GB, 1000202043392 bytes
19 heads, 10 sectors/track, 10281682 cylinders, total 1953519616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1b238ea9

      Device Boot      Start         End      Blocks   Id  System
/dev/md125p1            2048  1953519615   976758784   83  Linux
[root@localhost ~]# 

I tried to "activate" the array in different ways (I'm not experienced with mdadm, and the man page is gigantic, so I was only browsing it looking for my answer), but without success - the array stayed in "auto read only" and the device special file for the partition did not appear in /dev. Only after I recreated the partition via fdisk did it reappear in /dev... until the next reboot.
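The attempts looked along these lines (reconstructed from memory; mdadm's --readwrite option in manage mode is the documented way to clear the auto-read-only state):

```shell
# Switch the array from auto-read-only to read-write
mdadm --readwrite /dev/md125

# Stop the array and re-assemble it from scratch
mdadm --stop /dev/md125
mdadm --assemble --scan

# Ask the kernel to re-read the partition table on the array device
blockdev --rereadpt /dev/md125
```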

So, my question is - How do I make the array automatically available after reboot?

Here is some additional information:

First, the UUID of the array is visible in the blkid output:

[root@localhost ~]# blkid 
/dev/sdc: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" 
/dev/sdd: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" 
/dev/md126p1: UUID="60C8D9A7C8D97C2A" TYPE="ntfs" 
/dev/md126p2: UUID="3d1b38a3-b469-4b7c-b016-8abfb26a5d7d" TYPE="ext4" 
/dev/md126p3: UUID="1Msqqr-AAF8-k0wi-VYnq-uWJU-y0OD-uIFBHL" TYPE="LVM2_member" 
/dev/mapper/vg00-rootlv: LABEL="_Fedora-14-x86_6" UUID="34cc1cf5-6845-4489-8303-7a90c7663f0a" TYPE="ext4" 
/dev/mapper/vg00-swaplv: UUID="4644d857-e13b-456c-ac03-6f26299c1046" TYPE="swap" 
/dev/mapper/vg00-homelv: UUID="82bd58b2-edab-4b4b-aec4-b79595ecd0e3" TYPE="ext4" 
/dev/mapper/vg00-varlv: UUID="1b001444-5fdd-41b6-a59a-9712ec6def33" TYPE="ext4" 
/dev/mapper/vg00-tmplv: UUID="bf7d2459-2b35-4a1c-9b81-d4c4f24a9842" TYPE="ext4" 
/dev/md125: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda: TYPE="isw_raid_member" 
/dev/md125p1: UUID="420adfdd-6c4e-4552-93f0-2608938a4059" TYPE="ext4" 
[root@localhost ~]# 

1 Answer

Reading your issue reminds me of a similar issue I had, which I solved by running fdisk on each of the member devices, selecting "change a partition's type" (t), entering fd (Linux raid autodetect), and writing the table (w).
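A sketch of that fdisk session (run it once per member device; /dev/sdb here is just an example):

```shell
fdisk /dev/sdb
# inside fdisk:
#   t  - change a partition's type
#   fd - enter type "fd" (Linux raid autodetect)
#   w  - write the table and exit
```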

I also had to add the array (in my case md3) to /etc/mdadm.conf; in your case, however, it seems to be there already. What I did was:

mdadm -Q --examine /dev/sdb1

Then retrieve the UUID from its output and use it in:

mdadm -A -u 6f14c076:4b61f2e9:17138dff:69d83514 /dev/md3

This will detect the md3 configuration and start the array.

mdadm --examine --scan >>/etc/mdadm.conf

This detects all four md devices and appends them to the file.
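The lines appended by --examine --scan are ARRAY entries; in /etc/mdadm.conf they look roughly like this (the metadata version shown is an assumption, the UUID is the one examined above):

```
ARRAY /dev/md3 metadata=1.2 UUID=6f14c076:4b61f2e9:17138dff:69d83514
```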

vi /etc/mdadm.conf

I edited the file to add the missing md3 line.

Check the result with pvs, vgs, and lvs.
