13
votes
Accepted
How can I remove this "zombie" mdadm array?
After some more digging, I found the combination of commands that fixed the issue:
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
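If the array still resurrects itself at the next boot, the usual follow-up (an assumption beyond the two commands above; the paths are Debian/Ubuntu conventions) is to remove its definition and rebuild the initramfs:

```shell
# Delete the stale "ARRAY /dev/md0 ..." line from /etc/mdadm/mdadm.conf,
# then rebuild the initramfs so early boot no longer knows the array:
sudo update-initramfs -u
```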
10
votes
How to force mdadm to stop RAID5 array?
If you're using LVM on top of mdadm, sometimes LVM will not delete the Device Mapper devices when deactivating the volume group. You can delete them manually.
Ensure there's nothing in the output of ...
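The truncated step presumably verifies that nothing still holds the md device open; a hedged sketch of the manual cleanup (vgname-lvname is a placeholder for whatever stale mapping dmsetup actually lists):

```shell
# List Device Mapper devices that may still pin the md device:
sudo dmsetup ls

# Remove a stale LVM mapping by hand:
sudo dmsetup remove vgname-lvname

# Now stopping the array should succeed:
sudo mdadm --stop /dev/md0
```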
10
votes
Need to find which drives are mirrored within RAID-10 array
Recent versions of mdadm show this right in the details of the array. Example from mdadm v3.3 (3rd September 2013):
$ mdadm --detail /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Tue ...
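The excerpt is cut off before the device table; on mdadm v3.3 and later, the detail listing tags each member of a RAID-10 with its mirror set, roughly like this (device names and numbers are illustrative, not from the original answer):

```shell
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync set-A   /dev/sda1
       1       8       17        1      active sync set-B   /dev/sdb1
       2       8       33        2      active sync set-A   /dev/sdc1
       3       8       49        3      active sync set-B   /dev/sdd1
```

In the default near-2 layout, each set-A device and the set-B device that follows it hold the same data, so here sda1/sdb1 and sdc1/sdd1 are the mirrored pairs.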
8
votes
Converting LVM/EXT4 to ZFS without losing data
Short:
I think there is no in-place conversion of ext4 to ZFS.
For media servers I'd recommend SnapRAID instead
Edit:
Before I go deeper: Remember to use ECC RAM with ZFS.
SnapRAID is not that ...
5
votes
How-to change the name of an MD device (mdadm)
None of the other answers worked for me, but on CentOS I used the following guide. The issue is that /etc/mdadm.conf is not really used at boot time and only gets updated when a new kernel is ...
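A hedged sketch of the rename procedure on CentOS (device names and the myhost:1 name are placeholders, and dracut is the CentOS/RHEL way to rebuild the initramfs -- adapt to your setup):

```shell
# Stop the misnamed array, then re-assemble it while rewriting the name
# stored in the superblock:
sudo mdadm --stop /dev/md127
sudo mdadm --assemble /dev/md1 --name=myhost:1 --update=name /dev/sdb1 /dev/sdc1

# Persist the definition and rebuild the initramfs so the name survives boot:
sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf
sudo dracut -f
```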
5
votes
How can I make mdadm auto-assemble RAID after each boot?
I had this problem on my Raspberry Pi 2 running Raspbian GNU/Linux 8 (jessie). I had a RAID array on /dev/sda1 and /dev/sdb1 which failed to assemble at boot. I had in my /etc/mdadm/mdadm.conf file ...
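The excerpt is cut off before the resolution; the usual fix for assemble-at-boot failures (a sketch under the assumption that the conf file simply lacks correct ARRAY lines, Debian/Raspbian paths) is:

```shell
# Regenerate the ARRAY definitions from the running arrays:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the definitions are available at boot:
sudo update-initramfs -u
```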
5
votes
How to get an inactive RAID device working again?
A simple way to get the array to run assuming there is no hardware problem and you have enough drives/partitions to start the array is the following:
md20 : inactive sdf1[2](S)
732442488 blocks ...
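A sketch of the simple restart the answer alludes to, assuming no hardware fault and enough members to run (the member devices beyond sdf1 are placeholders):

```shell
# Tear down the half-assembled, inactive array:
sudo mdadm --stop /dev/md20

# Re-assemble it, with --run forcing a start even if some members are missing:
sudo mdadm --assemble --run /dev/md20 /dev/sdf1 /dev/sdg1 /dev/sdh1
```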
4
votes
Accepted
Can I create a degraded mdadm raid10-near (raid 1e) array?
I tried it with
mdadm --create /dev/md0 -n 3 -l 10 -p n2 /dev/sda2 /dev/sdb2 missing
...
mdadm md0 -a /dev/sdc2
and it seemed to work.
Since it operated in degraded mode I can assume data is ...
4
votes
Accepted
Ubuntu server remove extra partition and resize current larger in mdadm RAID1
Something is unclear in your description: how can /dev/sda1 be in both /dev/md2 and /dev/md3? Also, is this RAID1? Which devices make up each array?
To give you an idea of a possible sequence of steps, I ...
4
votes
Accepted
LVM: How should I attempt to recover from PV and possible LV corruption?
If any of this starts to not work, or stops making sense, STOP and ask a subject matter expert. This is unsafe work. Operate on disk images copied by "dd" to either files on a large storage ...
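A hedged sketch of the imaging step the answer insists on (paths and device names are placeholders; always image to storage large enough to hold the whole disk):

```shell
# Copy the suspect disk to a file, continuing past read errors:
sudo dd if=/dev/sdb of=/mnt/big/sdb.img bs=1M conv=noerror,sync status=progress

# Attach the image as a loop device and do all recovery experiments there:
sudo losetup --find --show /mnt/big/sdb.img
```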
4
votes
What happens when you remove a drive from linux raid?
RAID 0 will implode as soon as a single disk fails, as the data is spread across the disks, regardless of the size of the partition/volume in comparison to the array (a volume that's 1/10th the size ...
3
votes
Accepted
RAID5 does not spindown after grow (ext4lazyinit)
ext4lazyinit is doing exactly what it says - it's initializing the rest of the filesystem in a lazy way. It does this to give the appearance of quickly producing a filesystem. As you have noticed, it's ...
3
votes
Accepted
Shrinking a RAID1 to free up space on the HDD for a new RAID1 Partition
Shrinking the RAID doesn't shrink the partitions - you have to do that manually.
This is not 100% trivial, as your MD raid may contain a RAID superblock at the beginning OR at the end.
If the RAID ...
3
votes
Degraded RAID5 after complete disk failure - What to do Now?
Basically it's just a simple
mdadm /dev/md127 --add /dev/newdrive
and then watch cat /proc/mdstat and/or dmesg -w for rebuild progress or failure.
The sooner you add a new drive to the array, the ...
3
votes
btrfs ontop of mdadm raid - calculating stripes for corrupt sectors for use with raid6check
Alright, I got a somewhat working way to do this after talking to JyZyXEL on #linux-raid on Freenode.
raid6check reports total stripes so run it like this to see the basic information without running ...
3
votes
"ext4lazyinit" running since 6 days on a new RAID5 array
I'm having the same problem. 24GB RAID5 array and I started a mkfs.ext4 yesterday. Leaving this here for anyone else who comes across this thread with the info I've found.
The easiest way to do this ...
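For anyone who would rather pay the initialization cost up front, the usual way to avoid the background ext4lazyinit thread entirely (not spelled out in the truncated excerpt) is to disable lazy init at mkfs time:

```shell
# Initialize inode tables and journal during mkfs itself; mkfs takes much
# longer, but no ext4lazyinit kernel thread runs afterwards:
sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0
```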
2
votes
mdadm RAID Fast Setup with Empty Drives?
In general, a newly created redundant array on zeroed disks would not need any prior syncing, as long as the parity (or copy, for RAID1) of those zeroed input blocks is also zero. ...
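A minimal sketch of such a creation, assuming the members really are all-zero (device names are placeholders); since the parity of all-zero blocks is itself zero, the array is consistent from the start and the initial resync can be skipped:

```shell
# --assume-clean skips the initial resync. Only safe if every member is
# genuinely zeroed -- otherwise parity will be silently wrong:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    --assume-clean /dev/sd[abcd]1
```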
2
votes
Accepted
Linux mdadm does not assemble array, but recreation of array does it
Resolution
WARNING:
The instructions below delete your existing RAID setup and create a new md RAID 1 array with two entire block devices, /dev/sdc and /dev/sdd.
Ensure that your kernel has ...
2
votes
MDADM reshape really slow
In case anyone ends up here like I did, this command kicked the sync into high gear for me:
echo max | sudo tee /sys/block/md0/md/sync_max
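If that alone does not help, the classic md resync speed knobs are worth checking too (the values below are examples, in KiB/s per device, not recommendations from the original answer):

```shell
# Raise the floor and ceiling of the rebuild/reshape throughput:
echo 50000  | sudo tee /proc/sys/dev/raid/speed_limit_min
echo 500000 | sudo tee /proc/sys/dev/raid/speed_limit_max

# Watch progress:
cat /proc/mdstat
```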
2
votes
Accepted
MDADM raid "lost" after reboot
Well, it turns out that, in a last hurrah, I tried to re-run the "create" command I previously used to build the array in the first place and..... guess who got his data back!!
Let's say I'm gonna back up ...
2
votes
Linux MD software raid stripe cache size
As I understand it, this is a cache for completed stripes of data, ready to be written to disk, including parity data.
Before a stripe is written to disk, it must be formed somewhere.
That depends. I usually ...
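For reference, the stripe cache is sized in pages per member device and can be inspected and tuned via sysfs (md0 and 8192 are examples; the memory cost is roughly pages x 4 KiB x number of member disks):

```shell
# Current size, in pages per member device:
cat /sys/block/md0/md/stripe_cache_size

# Raise it -- often helps RAID5/6 write throughput at the cost of RAM:
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size
```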
2
votes
Accepted
Why did mdadm disable a disk on my raid array?
The first step in diagnosing this would be to run S.M.A.R.T. tests on the disk - something like
sudo smartctl -A /dev/sdX
to see what it self-reports. You might also want to do long disk tests and ...
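A sketch of the long-test follow-up the answer mentions (sdX is a placeholder):

```shell
# Start an extended self-test; it runs in the drive's firmware, in the
# background, and can take hours:
sudo smartctl -t long /dev/sdX

# Later, review the self-test log and the full attribute table:
sudo smartctl -a /dev/sdX
```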
2
votes
Accepted
Software RAID Nonfunctional after power outage
After much research, I was able to restore my RAID array without apparent data loss.
I did ultimately have to use mdadm --create --assume-clean. I opted to use overlay files so that I could non-...
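A hedged sketch of the overlay-file technique, adapted from the approach described on the linux-raid wiki (file path, sizes, and device names are placeholders): writes go to a sparse copy-on-write file while the real disk stays untouched, so a risky mdadm --create --assume-clean can be tested non-destructively.

```shell
# Sparse overlay file to absorb writes:
truncate -s 4T /tmp/overlay-sdb
loop=$(sudo losetup --find --show /tmp/overlay-sdb)

# Device-mapper snapshot: reads hit /dev/sdb, writes land in the overlay:
size=$(sudo blockdev --getsz /dev/sdb)
sudo dmsetup create sdb-overlay \
    --table "0 $size snapshot /dev/sdb $loop P 8"

# Experiment on /dev/mapper/sdb-overlay instead of /dev/sdb.
```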
2
votes
mdadm raid has unknown filesystem after adding new disk
Preface
I am using mdadm and created a raid-5 with 3x 4TB drives (sda, sdb sdc)
Using RAID5 with such big drives is asking for trouble. You can find many warnings against doing that here ...
2
votes
Accepted
how is mdadm executed during startup?
mdadm installs several sets of udev rules, which trigger on device detection:
/usr/lib/udev/rules.d/01-md-raid-creating.rules
/usr/lib/udev/rules.d/63-md-raid-arrays.rules
/usr/lib/udev/rules.d/64-md-...
2
votes
Accepted
Ubuntu server RAID-5 failed, rebuilding fails
You should try with the --run flag:
Once an appropriate array is found or created and the device is
added, mdadm must decide if the array is ready to be started. It will
normally compare ...
2
votes
Rebuild a RAID5 on ZYXEL NAS 540
Good news!
I finally have my data back!
I tried to recover the superblock with e2fsck using the backup superblocks listed, but none of them worked :(
So I decided to come back to the old plan and try again ...
2
votes
How do I get mdadm to send its emails via msmtp?
I believe I have fixed the problem. For anyone else who runs across this:
Add both the following lines into /etc/mdadm/mdadm.conf
MAILADDR <recipient>
MAILFROM <sender>
Create a ...
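The truncated last step is presumably about wiring mdadm's mail delivery to msmtp. mdadm --monitor hands alerts to a sendmail-compatible binary, so one common approach (an assumption here, not necessarily the answer's exact elided step; package names are Debian/Ubuntu) is:

```shell
# msmtp-mta provides /usr/sbin/sendmail as a wrapper around msmtp,
# which is the program mdadm's monitor invokes:
sudo apt install msmtp msmtp-mta

# Verify delivery with a test alert for each array:
sudo mdadm --monitor --scan --test --oneshot
```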
2
votes
Disable warning from mdadm for “degraded” RAID1 with missing drive
The mdmonitor.service runs permanently and immediately notifies about changes to mdadm devices.
The daily warning is generated by /etc/cron.daily/mdadm. I could disable the daily warning by ...
2
votes
Mount Btrfs raid 5 from failed ReadyNas 104 on Linux - aka how I restore data from my ReadyNas
If only I had known it could be this simple - I spent hours and hours trying to fix it.
I still don't know how to work around the compatibility flags, but downgrading to Ubuntu 14.06 LTS did the trick; btrfs ...
Related Tags
mdadm × 323
raid × 198
linux × 180
software-raid × 105
ubuntu × 49
raid-1 × 39
raid-5 × 38
hard-drive × 33
partitioning × 19
debian × 16
lvm × 15
raid6 × 14
data-recovery × 12
raid-10 × 12
centos × 7
nas × 7
raid-0 × 7
boot × 6
btrfs × 6
lvm2 × 6
superblock × 6
backup × 5
arch-linux × 5
ext4 × 5
zfs × 5