
I recently got a new dedicated server and I'm trying to mount all the drives it contains.

$ sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME    FSTYPE               SIZE MOUNTPOINT        LABEL
loop0   squashfs            31.1M /snap/snapd/11036 
loop1   squashfs            31.1M                   
loop2   squashfs            55.5M /snap/core18/1988 
loop3   squashfs            69.8M /snap/lxd/19032   
loop4   squashfs            55.4M /snap/core18/1944 
loop5   squashfs            31.1M /snap/snapd/10707 
loop6   squashfs            69.9M /snap/lxd/19188   
sda                          1.8T                   
├─sda1                     987.5K                   
├─sda2  linux_raid_member    1.8T                   md2
│ └─md2 ext4                 1.8T /                 root
├─sda3  swap                 512M [SWAP]            swap-sda3
└─sda4  iso9660           1007.5K                   config-2
sdb                          1.8T                   
├─sdb1                     987.5K                   
├─sdb2  linux_raid_member    1.8T                   md2
│ └─md2 ext4                 1.8T /                 root
└─sdb3  swap                 512M [SWAP]            swap-sdb3
sdc                          1.8T                   
├─sdc1                     987.5K                   
├─sdc2  linux_raid_member    1.8T                   md2
│ └─md2 ext4                 1.8T /                 root
└─sdc3  swap                 512M [SWAP]            swap-sdc3
sdd                          1.8T                   
├─sdd1                     987.5K                   
├─sdd2  linux_raid_member    1.8T                   md2
│ └─md2 ext4                 1.8T /                 root
└─sdd3  swap                 512M [SWAP]            swap-sdd3

As expected, I found the four disks of my setup. Now I conclude that only one device is mounted, because:

$ df
Filesystem      1K-blocks    Used  Available Use% Mounted on
udev             16288292       0   16288292   0% /dev
tmpfs             3261132    1268    3259864   1% /run
/dev/md2       1922082712 6531836 1817891636   1% /
tmpfs            16305644       0   16305644   0% /dev/shm
tmpfs                5120       0       5120   0% /run/lock 
tmpfs            16305644       0   16305644   0% /sys/fs/cgroup
/dev/loop4          56832   56832          0 100% /snap/core18/1944
/dev/loop5          31872   31872          0 100% /snap/snapd/10707
tmpfs             3261128       0    3261128   0% /run/user/1001
/dev/loop3          71552   71552          0 100% /snap/lxd/19032
/dev/loop6          71680   71680          0 100% /snap/lxd/19188
/dev/loop2          56832   56832          0 100% /snap/core18/1988
/dev/loop0          31872   31872          0 100% /snap/snapd/11036

So I want to mount the remaining drives, but I can't tell for sure which ones are mounted: the filesystem shown above is /dev/md2, and I cannot relate it to /dev/sda, /dev/sdb, /dev/sdc or /dev/sdd.

I imagine it concerns /dev/sda, but how can I be sure about this?

1 Answer


the filesystem shown above is /dev/md2, and I cannot relate it to /dev/sda, /dev/sdb, /dev/sdc or /dev/sdd.

Look more closely at your 'lsblk' output: the md2 device is actually shown there, as a child device (meaning it is a virtual device built on top of sda2, which is itself a partition on sda).

Also notice that in your 'lsblk' output, the same md2 appears under all four disks, not just one. This means it is a "multi-disk" device that combines several disks into one virtual device – in other words, an 'md' device literally is a software RAID array.
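
You can also see the relationship directly: lsblk can print the inverse dependency tree of md2, and /proc/mdstat lists the member partitions of every array (both are read-only checks):

$ lsblk --inverse /dev/md2
$ cat /proc/mdstat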

So when 'findmnt' or 'df' shows that /dev/md2 is mounted, all four disks are involved. You can use mdadm -D /dev/md2 to check what kind of array it is – Linux mdadm handles traditional RAID, such as RAID 10 or RAID 6, and the type is shown in the "Raid Level:" field.
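
For example, to pull out just the relevant fields (the grep pattern is merely a suggestion):

$ sudo mdadm -D /dev/md2 | grep -E 'Raid Level|Raid Devices'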


If your hosting provider has created the array with some undesirable RAID profile (e.g. RAID 1 mirroring across all four disks, on the extreme end of redundancy over capacity), you may be able to reshape it using:

# mdadm -G /dev/md2 --level=raid10
# watch cat /proc/mdstat

Though mdadm might require you to first convert RAID 1 to RAID 5, then RAID 5 to RAID 0, and only then RAID 0 to RAID 10 with the near-2 layout... See the guide to mdadm.
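
A very rough sketch of such a chain, assuming the array is currently RAID 1 (untested – individual steps can require spare devices or a recent mdadm version, so consult the guide before running anything):

# mdadm -G /dev/md2 --level=raid5
# mdadm -G /dev/md2 --level=raid0
# mdadm -G /dev/md2 --level=raid10 --layout=n2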

If that is not possible, you might need to:

  1. make sure you have console (KVM) access to the server;
  2. remove some disks from the old array and use them to create a new array (using RAID 0 temporarily), as sketched below;
  3. copy the entire OS to it and somehow reboot using the new array (e.g. editing the root= kernel command-line option);
  4. destroy the old array and add its disks to the new one, converting to RAID 10.
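
A hedged sketch of steps 2 and 3, assuming sdc and sdd are the disks removed first (the device names, the /mnt/newroot mount point and the rsync flags are illustrative, not a tested recipe):

# mdadm /dev/md2 --fail /dev/sdc2 --remove /dev/sdc2
# mdadm /dev/md2 --fail /dev/sdd2 --remove /dev/sdd2
# mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/sdc2 /dev/sdd2
# mkfs.ext4 /dev/md3
# mount /dev/md3 /mnt/newroot
# rsync -aAXH --exclude={'/dev/*','/proc/*','/sys/*','/run/*','/mnt/*'} / /mnt/newroot

Step 4 is then the reverse: once the server boots from the new array, stop md2, wipe the freed partitions with mdadm --zero-superblock, and grow the new array to RAID 10.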

(You can choose to create the new array using something else than mdadm, e.g. LVM, ZFS, or multi-disk Btrfs. And if this is a brand-new server, you can of course reinstall it from scratch via KVM instead of trying to migrate the existing OS.)
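
For instance, a multi-disk Btrfs equivalent could be created in a single step (a sketch only – this destroys any data on the listed partitions):

# mkfs.btrfs -f -d raid10 -m raid10 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2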
