All Questions
122 questions
5 votes · 1 answer · 538 views
Mount Btrfs raid 5 from failed ReadyNAS 104 on Linux - aka how I restore data from my ReadyNAS
After about 7 years my ReadyNAS finally failed. I would have expected detailed recovery instructions from Netgear, but I have read dozens of threads: my problem is not uncommon, but no solution was found.
After ...
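(Sketch for the question above - the usual first, read-only steps when pulling ReadyNAS disks into a plain Linux box, assuming the data sits on an md array with Btrfs on top; the device names, partition layout and mount point are assumptions:)
sudo mdadm --examine /dev/sd[abcd]*        # see which partitions actually carry md superblocks
sudo mdadm --assemble --scan --readonly    # assemble whatever can be assembled, read-only
sudo mount -o ro,degraded /dev/md127 /mnt/recovery   # mount the Btrfs volume read-only, tolerating a missing device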
5 votes · 2 answers · 4k views
Does TRIM avoid the performance impact of mdadm RAID 1 on SSD?
It has already been mentioned in other questions that Red Hat recommends against using mdadm RAID 1 on SSD.
Red Hat also warns that software RAID levels 1, 4, 5, and 6 are not recommended for use ...
4 votes · 1 answer · 4k views
MDADM RAID-0 does not increase in size after grow on AWS Linux with EBS
I wanted to grow a RAID0 from 2 disks to 3 (each a 1TB EBS volume on Amazon AWS), but the size did not change afterwards. The RAID started with two disks, sdc and sdd, and the new one was sdf.
Here is the grow-...
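(Sketch for the question above, assuming the array is /dev/md0 with a single ext4 or XFS filesystem on it - growing the md device does not grow the filesystem on top, which has to be resized separately; the device and mount-point names are assumptions:)
sudo mdadm --detail /dev/md0     # confirm the md device itself now reports the larger size
sudo resize2fs /dev/md0          # grow an ext4 filesystem to fill the device
sudo xfs_growfs /mnt/raid        # or, for XFS, grow it via its (assumed) mount point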
4 votes · 2 answers · 6k views
In mdadm's /proc/mdstat, what does [U_U] mean?
I just set up a RAID 5 array for my media server. I got 2 new disks and set up an array with one drive missing, so I can copy the data from the old single drive to the new array, and later I will add ...
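(Sketch for the question above - a way to see the same per-device state spelled out in words rather than the [U_U] shorthand; the array name is an assumption:)
cat /proc/mdstat                 # compact status with the [U_U]-style flags per member
sudo mdadm --detail /dev/md0     # verbose per-device state such as "active sync" or "removed"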
4 votes · 1 answer · 13k views
Does mdadm record array event history?
I received an automated email from the mdadm daemon:
A Fail event had been detected on md device /dev/md0.
It could be related to component device /dev/sdh.
I find the wording strange. "could be"? ...
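(Sketch for the question above - places where this event history usually survives; the monitor unit name and log path vary by distribution, so treat them as assumptions:)
journalctl -u mdmonitor          # mails from the monitor are normally also logged to the journal
grep -i mdadm /var/log/syslog    # or to the plain syslog on non-systemd setups
sudo mdadm --examine /dev/sdh    # per-member event counter and last update time from the superblock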
4 votes · 1 answer · 1k views
HDDs occasionally "click" and dump this error
Every once in a while, my HDDs (six 1TB drives in a Linux md RAID-6 array) make a brief "click" sound, and I get this from dmesg. I don't really have any clue how to start diagnosing this, or even how ...
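(Sketch for the question above - the usual first diagnostic pass; the device name is an example, repeat for each member:)
sudo smartctl -a /dev/sda        # SMART health, reallocated/pending sector counts and the drive's error log
sudo dmesg -w                    # follow kernel messages live to catch the error as the click happens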
4 votes · 0 answers · 3k views
How to prevent an mdadm array check from consuming excessive system resources?
Once a month I've been doing
echo check > /sys/block/md2/md/sync_action
in order to force a software RAID consistency check. The problem is that this is a 7.2T RAID 5. The consistency check takes ...
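(Sketch for the question above - the knobs usually used to rein a check in; md2 is taken from the question, the numeric values are only illustrative:)
echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_max   # cap check/resync throughput (KiB/s per device)
echo 1000 | sudo tee /proc/sys/dev/raid/speed_limit_min    # lower the guaranteed minimum as well
echo idle | sudo tee /sys/block/md2/md/sync_action         # or abort a running check outright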
4 votes · 1 answer · 758 views
RAID Spare Spin Down
I have a software RAID 5 array (mdadm under Debian Linux) that has been up for the better part of a decade and has seen its fair share of drive failures. I've always been leery about setting up a "...
3 votes · 1 answer · 6k views
XFS: Mount overrides sunit and swidth options
I have a 9TB XFS partition consisting of four 3TB disks in a RAID-5 array with a chunk size of 256KB, using MDADM.
When I created the partition, the optimal stripe unit and width values (64 and 192 ...
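(Sketch for the question above - the same geometry expressed as mount options, assuming the 256KB chunk and three data disks from the question; the mount options take 512-byte sectors while mkfs.xfs reports filesystem blocks, and the device and mount-point names are assumptions:)
# 256KiB chunk = 512 sectors per stripe unit; 3 data disks x 512 = 1536 sectors stripe width
sudo mount -o sunit=512,swidth=1536 /dev/md0 /mnt/storage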
3 votes · 4 answers · 10k views
MDADM reshape really slow
To sum up: my RAID reshape with mdadm is really slow.
The complete story: 3 days ago I realized that one of the disks in my RAID 5 array was faulty. I removed it and replaced it with a brand new ...
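(Sketch for the question above - the two settings most often raised to speed a reshape up, assuming the array is /dev/md0; the values are illustrative and trade RAM/CPU for speed:)
echo 100000 | sudo tee /proc/sys/dev/raid/speed_limit_min   # raise the guaranteed minimum reshape speed (KiB/s per device)
echo 16384 | sudo tee /sys/block/md0/md/stripe_cache_size   # larger RAID5/6 stripe cache, at the cost of RAM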
3 votes · 1 answer · 17k views
Linux mdadm does not assemble array, but recreating the array does
Maybe I'm not being very clear in the title. When I try to assemble my RAID 1 array with mdadm:
sudo mdadm --assemble /dev/md0 /dev/sdc /dev/sdd
it tells me:
mdadm: Cannot assemble mbr metadata ...
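(Sketch for the question above - that error usually means mdadm found an MBR partition table rather than an md superblock on the whole disk, so the array members may actually be the partitions; the partition numbers below are assumptions:)
sudo mdadm --examine /dev/sdc /dev/sdd /dev/sdc1 /dev/sdd1   # see where the md superblocks actually live
sudo mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1           # if they are on the partitions, assemble those instead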
3 votes · 0 answers · 746 views
MD RAID 6 with XFS Periodic Kernel Panic -- Possible Kernel Bug?
This now appears to be a bug in the combination of MD and XFS, after finding similar kernel panic reports elsewhere. A bug report has been filed with Ubuntu here (https://bugs.launchpad.net/ubuntu/+...
3 votes · 0 answers · 3k views
Synology DiskStation: How to stop/interrupt a running reshape (RAID5 -> RAID6)
The short of it: I have a running reshape from RAID5 with 5 disks to RAID6 with 6 disks, which needs to be stopped so I can power off the system. I do not care if the reshape needs to start fresh once ...
3 votes · 0 answers · 255 views
Linux Software RAID - Migrate from disk device to a partition
I have two 8TB disks configured as RAID1 using mdadm on Ubuntu 18.04 server. The way I created the software RAID was on the whole devices - /dev/sdb and /dev/sdc - instead of on partitions such as /dev/...
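(Sketch for the question above - the usual one-disk-at-a-time approach for RAID1, with the caveat that the new partition must be at least as large as the current component size, which may mean shrinking the array and filesystem first; sdb/sdc come from the question, the array and partition names are assumptions:)
sudo mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc   # drop one member from the mirror
# ...partition /dev/sdc (e.g. with parted), then add the new partition back and let it rebuild
sudo mdadm /dev/md0 --add /dev/sdc1
cat /proc/mdstat                                        # wait for the rebuild, then repeat for the other disk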
3 votes · 0 answers · 185 views
Metadata of the assembled RAID inconsistent with the metadata of the individual drives?
We have a RAID 10 with two failed drives, with one drive from each set still functional.
When booting into the rescue system, the metadata seems fine and consistent with the expected state.
The ...