
I'm trying to assemble a RAID1 array using mdadm. I'd like to be able to debug this error, but I can't tell where the error actually is, other than "Invalid Argument". The array was working across multiple reboots, but on this reboot it doesn't come up.

Here's the main log:

> mdadm --assemble --scan
mdadm: looking for devices for /dev/md1
mdadm: UUID differs from /dev/md/0.
mdadm: no RAID superblock on /dev/sdi
mdadm: UUID differs from /dev/md/0.
mdadm: no RAID superblock on /dev/sdh
mdadm: UUID differs from /dev/md/0.
mdadm: no RAID superblock on /dev/sdg
mdadm: no RAID superblock on /dev/md/0
mdadm: /dev/sdi1 is identified as a member of /dev/md1, slot 1 replacement.
mdadm: /dev/sdh1 is identified as a member of /dev/md1, slot 1.
mdadm: /dev/sdg1 is identified as a member of /dev/md1, slot 0.
mdadm: added /dev/sdh1 to /dev/md1 as 1 (possibly out of date)
mdadm: added /dev/sdi1 to /dev/md1 as 1 (possibly out of date) replacement
mdadm: added /dev/sdg1 to /dev/md1 as 0
mdadm: failed to RUN_ARRAY /dev/md1: Invalid argument
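
For what it's worth, I understand the kernel usually logs the underlying reason when RUN_ARRAY fails with "Invalid argument", so a generic way to look for more detail (not output from my machine) would be something like:

dmesg | grep -iE 'md1|raid1'          # kernel messages from the md/raid1 driver
journalctl -k | grep -iE 'md1|raid1'  # same thing via the systemd journal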

Here's what I don't totally understand: it's assembling /dev/md1, so why does it keep talking about the UUID for /dev/md/0?
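
For reference, the UUID a member partition actually carries can be read straight off its superblock with a standard mdadm call (using one of my members as the example):

mdadm --examine /dev/sdg1 | grep -E 'Array UUID|Name'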

My mdadm.conf is:


#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR [email protected]

# definitions of existing MD arrays
#these are commented for some reason
#ARRAY /dev/md/1  metadata=1.2 UUID=450cec12:5c8a7248:3c93a59e:11ddc20e name=b:1
#ARRAY /dev/md/0  metadata=1.2 UUID=aa81a619:c1913636:1b48fbc0:11328059 name=b:0

# This file was auto-generated on Tue, 28 Jan 2020 20:33:10 -0500
# by mkconf $Id$
ARRAY /dev/md/0 metadata=1.2 name=b:0 UUID=aa81a619:c1913636:1b48fbc0:11328059
ARRAY /dev/md1 metadata=1.2 name=t:1 UUID=90370f2d:ca2757ed:46ff3b47:c8df472f

Did I edit something in my mdadm.conf a while back that broke this? For example, the /dev/md/1 line is commented out (is that okay?).
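
(For context, I know the ARRAY definitions can be regenerated from whatever is currently assembled, and that on Debian the initramfs copy of the config should be refreshed afterwards; standard commands, shown here only as a sketch:)

mdadm --detail --scan    # prints ARRAY lines for the arrays assembled right now
update-initramfs -u      # Debian: refresh the mdadm.conf baked into the initramfs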

1 Answer


The problem here was kind of weird: the RAID had a spare/replacement drive that mdadm had detected as having bad blocks. Here's how I found out what was happening:

mdadm --examine /dev/md1 /dev/sdd1 /dev/sdb1 /dev/sde1

/dev/sdd1:
   ...
   Creation Time : Sat Feb  1 12:05:48 2020
   Raid Level : raid1
   Raid Devices : 2
   ...
   Bad Block Log : 512 entries available at offset 72 sectors
   ...
   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)


/dev/sdb1:
   ...
   Bad Block Log : 512 entries available at offset 72 sectors
   ...
   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)


/dev/sde1:
   ...
   Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
   ...
   Device Role : Replacement device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
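
For completeness, the bad-block entries recorded in a member's metadata can also be dumped directly (a standard mdadm option; output omitted here):

mdadm --examine-badblocks /dev/sde1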

In hindsight, the error message I got was extremely unhelpful given what the problem actually was.

After I removed /dev/sde from the computer and unmounted, stopped, and reassembled everything, I had to --add /dev/sdb1 back; it then ran one of those rebuild passes (very quick), and after that it worked.
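
Roughly, the sequence was something like the following (the mount point is a placeholder for wherever the array was mounted; device names are as they appeared on my system):

umount /mnt/raid                          # placeholder mount point
mdadm --stop /dev/md1
mdadm --assemble /dev/md1                 # with /dev/sde pulled; may need --run to start degraded
mdadm --manage /dev/md1 --add /dev/sdb1   # re-add the out-of-date member
cat /proc/mdstat                          # watch the (very quick) resync
mount /dev/md1 /mnt/raid                  # placeholder mount point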
