I used mdadm to create a RAID 1 array from two 3TB drives. After the process, which took all night, I found that the two drives, /dev/sdb and /dev/sdc, had disappeared from the file explorer. I rebooted the system and they reappeared, then disappeared again after another reboot. They appear to be corrupted: in GParted, where they do show up, I get this error:

Corrupt extent header while reading journal super block

Unable to read the contents of this file system!
Because of this, some operations may be unavailable.
The cause might be a missing software package.
The following list of software packages is required for ext4 file system support: e2fsprogs v1.41

I named the new RAID array md0; it has a folder at /mnt/md0, which is empty.

There is a config file, /etc/mdadm/mdadm.conf, which reads:

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This configuration was auto-generated on Mon, 24 Dec 2018 02:28:48 -0500 by mkconf
ARRAY /dev/md0 metadata=1.2 name=dna-computer:0 UUID=df25e6e6:cccb8138:aa9f4538:31608c33

Not sure if this helps, but the output of cat /proc/mdstat is:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>
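
The empty list above indicates that no md array is currently assembled. A minimal sketch of how the member disks could be inspected and the array re-assembled, assuming the members are /dev/sdb and /dev/sdc as described above:

# read the RAID superblocks on the member disks (read-only)
sudo mdadm --examine /dev/sdb /dev/sdc

# try to assemble every array listed in /etc/mdadm/mdadm.conf
sudo mdadm --assemble --scan

# check whether /dev/md0 came up
cat /proc/mdstat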

1 Answer

This is what mdadm does: it replaces your disks with a RAID identity, represented as a /dev/md device (md stands for "multiple device"). From that point on, you don't want direct access to your individual hard disks, because that would compromise the disk array.

As stated in the manual page of mdadm:

RAID devices are virtual devices created from two or more real block devices. This allows multiple devices (typically disk drives or partitions thereof) to be combined into a single device to hold (for example) a single filesystem.

Also see, for example, this tutorial.
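
To make that layer concrete, the usual RAID 1 workflow looks roughly like the following. This is a sketch only; the device names /dev/sdb, /dev/sdc and /dev/md0 are taken from the question, and the choice of ext4 is an assumption:

# build the mirror; /dev/md0 is the virtual device that replaces the disks
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# the filesystem goes on the array, never on the member disks
sudo mkfs.ext4 /dev/md0

# mount the array; anything written here is mirrored to both disks
sudo mount /dev/md0 /mnt/md0

After this, all reads and writes go through /dev/md0; touching /dev/sdb or /dev/sdc directly would bypass (and could corrupt) the mirror.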

  • My md folder is located at /mnt/md0; does this serve the same purpose? Can I assume that this folder will hold 3TB of storage? Also, since this is RAID 1, how would I be able to access the hard drives separately in the case of data loss? Commented Dec 31, 2018 at 23:01
  • Your md device is at /dev/md<#>, which can be mounted in the /mnt/ directory. Type mount to see which devices are mounted where. The underlying disks are not meant for direct access; if you access them directly, you risk damaging your md array. Look at mdadm as a layer between your disks and the RAID'ed disks. If one disk gets damaged, mdadm must handle this (by setting the disk to failed and/or removing it from the array). So the answer to the question of how to access a "degraded" RAID array is: still through mdadm (see the sketch after these comments).
    – agtoever
    Commented Dec 31, 2018 at 23:43
  • I do not have a directory at /dev/md<#>. Just to make sure I understand: if I place files in the /mnt/md0 directory, this will essentially be like placing files on the 3TB RAID 1 array, and they will not take up space on the main drive? Commented Jan 4, 2019 at 20:39
  • Check the output of mount | grep md; this should show you which device (under /dev) is mounted at /mnt/md0. If it shows nothing, that directory is just a directory on your root mount (/). If it shows a device, that is where the data is read from and written to.
    – agtoever
    Commented Jan 4, 2019 at 22:34
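
Pulling the comments together, a short sketch for checking the mount and for handling a failed member through mdadm. The array name /dev/md0 comes from the question; /dev/sdb stands in for whichever disk has failed:

# is anything mounted at /mnt/md0? no output means it is just a folder on /
mount | grep md

# array health: state, member disks, and whether the array is degraded
sudo mdadm --detail /dev/md0

# mark a damaged member as failed and remove it from the array
sudo mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb

# add a replacement disk and let the mirror resync
sudo mdadm /dev/md0 --add /dev/sdb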
