
Questions tagged [raid6]

The tag has no usage guidance.

0 votes
1 answer
45 views

resize2fs won't grow an mdadm array (raid6, lvm, ext4)

I've decided to grow my raid6 array from 7x 2TB drives (~9TB usable in ext4/raid6) to 7x 4TB drives (~18TB usable in ext4/raid6). I have replaced all seven ...
asked by Bob Arezina
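
For orientation: in a stack like this, every layer has to be grown bottom-up before resize2fs can see any new space. A minimal sketch, assuming the array is /dev/md0 holding an LVM physical volume with logical volume /dev/vg0/lv0 (hypothetical names):

    # Grow the md device onto the larger members first
    sudo mdadm --grow /dev/md0 --size=max
    # Let LVM notice the bigger physical volume
    sudo pvresize /dev/md0
    # Extend the logical volume into the new free space
    sudo lvextend -l +100%FREE /dev/vg0/lv0
    # Only now can the ext4 filesystem grow (works online)
    sudo resize2fs /dev/vg0/lv0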
0 votes
1 answer
73 views

Activate inactive RAID6

Ubuntu 22.04.3 LTS. I have an mdadm RAID6 with 4 drives; this is a minimal test configuration before extending it to the target of 8 drives. The RAID was in a working state: cat /proc/mdstat Personalities : [linear] [...
asked by Andriy
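
An inactive array can usually be stopped and reassembled from its members. A minimal sketch, assuming the array is /dev/md0 built from /dev/sd[b-e]1 (hypothetical names):

    # Drop the half-assembled state, then let mdadm rescan
    sudo mdadm --stop /dev/md0
    sudo mdadm --assemble --scan
    # If the members are found but the array still refuses to start:
    sudo mdadm --assemble --force /dev/md0 /dev/sd[b-e]1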
1 vote
0 answers
37 views

Assemble raid6 array from spares only

This question is related to Managing raid6 device with pacemaker, but nothing in the linked question is relevant here, except that Pacemaker is involved. For the purpose of this ...
asked by Nykau
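
When every member shows up as a spare, the superblock event counts are usually out of step, and a forced assembly is the standard first attempt. A sketch under that assumption, with /dev/md0 and /dev/sd[b-e]1 as hypothetical names:

    # Check what each superblock actually records (level, role, events)
    sudo mdadm --examine /dev/sd[b-e]1
    # Then try to force assembly from the devices md considers spares
    sudo mdadm --stop /dev/md0
    sudo mdadm --assemble --force /dev/md0 /dev/sd[b-e]1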
2 votes
1 answer
94 views

Managing raid6 device with pacemaker

I am setting up 4 hosts, each exporting one local storage device over iSCSI with target. Every other host imports it, so that each host has concurrent access to all 4 storage devices. I built a ...
asked by Nykau
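
For the cluster side, Pacemaker ships an OCF resource agent that starts and stops md arrays: ocf:heartbeat:Raid1, which despite its name handles any md level. A hedged sketch using pcs, with the resource name and device made up for illustration:

    # Let the cluster, not the boot scripts, own assembly of the array
    sudo pcs resource create md_raid6 ocf:heartbeat:Raid1 \
        raidconf=/etc/mdadm/mdadm.conf raiddev=/dev/md0

Constraints would then order the iSCSI imports before this resource, so assembly only happens once all members are visible.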
0 votes
0 answers
360 views

Cannot stop md raid6 array

I set up a raid6 array on top of iSCSI and LVM shared logical volumes, which I use to make sure that only one node on the iSCSI network can use the raid volume at a time. Today I failed to stop the volume. ...
asked by Nykau
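
mdadm --stop fails with "device busy" as long as anything above the array still holds it open; with LVM stacked on top, the volume group has to be deactivated first. A minimal sketch, with vg_on_md0 as a hypothetical VG name:

    # See what still holds the md device open
    sudo lsof /dev/md0
    ls /sys/block/md0/holders
    # Deactivate the volume group that sits on the array
    sudo vgchange -an vg_on_md0
    # Stopping should now succeed
    sudo mdadm --stop /dev/md0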
2 votes
0 answers
260 views

Raid6 mdadm reshape operation interrupted - now cannot mount or examine

Edit: once the reshape finished, the array became fully accessible again. I had a power failure while a raid6 array was being reshaped, and now certain operations cannot be run against it, including ...
asked by Jonathan Cremin
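
As the edit notes, the array came back once the reshape completed. For the general case, an interrupted reshape can usually be resumed at assembly time. A hedged sketch, assuming a backup file path (here /root/md0-grow.bak, hypothetical) was given to --grow:

    # Resume the reshape; if the backup file was lost in the power
    # failure, --invalid-backup tells mdadm to proceed without it
    sudo mdadm --assemble /dev/md0 /dev/sd[b-h]1 \
        --backup-file=/root/md0-grow.bak --invalid-backup
    # Watch progress
    cat /proc/mdstat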
0 votes
1 answer
435 views

Why does mdadm remove a disk with no errors from the array and mark it as failed?

I have a Raid6 that keeps automatically marking a disk (/dev/sdi) as Failed and removing it from the array even though the long test does not show any errors: Test results sudo smartctl -l selftest /...
asked by Jorge
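
A clean SMART log plus repeated ejections often points at a timeout mismatch: a desktop drive can spend minutes on internal error recovery, the kernel gives up first, and md fails the disk. A sketch of the usual checks, assuming the drive supports SCT ERC:

    # Does the drive support bounded error recovery (SCT ERC)?
    sudo smartctl -l scterc /dev/sdi
    # If yes, cap recovery at 7 seconds so md handles bad sectors itself
    sudo smartctl -l scterc,70,70 /dev/sdi
    # If not, raise the kernel's command timeout above the drive's retries
    echo 180 | sudo tee /sys/block/sdi/device/timeout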
1 vote
0 answers
467 views

Growing RAID6 (mdadm) including update of stripe-width

I am running Debian Bullseye (OpenMediaVault 6) with backports kernel 6.0.3-1. I have set up a RAID6 array consisting of 4x 6TB Seagate IronWolf HDDs (4KiB physical sectors with 512B emulation). When I have ...
asked by bash0r1988
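
For reference, the ext4 hints derive from the chunk size and the number of data-bearing disks: stride = chunk / block size, and stripe_width = stride × (N − 2) for raid6, since two disks hold parity. A sketch assuming the common 512KiB chunk and 4KiB blocks on the 4-disk array (recompute after every grow):

    # stride = 512KiB chunk / 4KiB block = 128
    # stripe_width = 128 * (4 disks - 2 parity) = 256
    sudo tune2fs -E stride=128,stripe_width=256 /dev/md0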
0 votes
0 answers
116 views

Recovery of scp overwritten file

I had two 2GB txt files, a.txt and b.txt, that I wanted to copy to another directory using scp. However, I accidentally didn't type the destination directory and typed the following commands scp *.txt and ...
asked by asalimih
0 votes
1 answer
83 views

Semi-failed RAID6, still able to copy data but incredibly slow

I had a 5-drive software raid6 set up with mdadm (2 parity drives), and a drive failed. I ordered a replacement, and when I powered off the machine to swap the failed drive for the new one, ANOTHER ...
asked by Russ
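
With marginal members, every pending-sector retry stalls the whole array, so imaging it once with ddrescue (which logs bad areas, skips them, and retries later) is usually faster and safer than copying files directly. A sketch with hypothetical paths:

    # First pass: grab everything readable, skipping problem areas
    sudo ddrescue /dev/md0 /mnt/backup/md0.img /mnt/backup/md0.map
    # Second pass: retry the skipped areas a few times
    sudo ddrescue -r3 /dev/md0 /mnt/backup/md0.img /mnt/backup/md0.map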
2 votes
0 answers
241 views

Up to which point (disk size) is Raid 6 safe to use? [closed]

I have been reading a lot of warnings about RAID 6 becoming less and less safe to use as the storage per disk and overall array sizes increase. What are the best-practice limits/when does ...
asked by user2693017
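
The worry behind this question is the unrecoverable-read-error rate during a rebuild. A back-of-envelope sketch, assuming a full read of an 18TB array and the 1-in-10^15-bits URE rate quoted on consumer datasheets (real drives typically do better):

    # P(at least one URE) ~ 1 - exp(-bits_read * per_bit_error_rate)
    awk 'BEGIN { bits = 18e12 * 8; p = 1e-15;
                 printf "P(>=1 URE) ~ %.1f%%\n", (1 - exp(-bits * p)) * 100 }'

That works out to roughly 13% for a single full read. Raid6's second parity is what makes one URE during a rebuild survivable, which is why these warnings bite raid5 much harder than raid6.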
0 votes
1 answer
46 views

I added 5 drives to an mdadm raid array, then noticed 3 already had partitions on them. What is going to happen?

I have a 15-drive raid6 array using mdadm on Ubuntu Linux 18.04.2 server. I installed 5 more drives and added them to the raid array, and it started reshaping. However, after that I noticed that three ...
asked by Matthew Weigand
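
Once the reshape is writing to those disks, their old partition contents are being overwritten regardless, and active members must not be touched. Stale signatures are why clearing a used disk before adding it is the usual practice; a sketch for next time, with /dev/sdX hypothetical:

    # Clear leftover filesystem/partition signatures BEFORE adding a
    # used disk to an array; never run this on an active member
    sudo wipefs -a /dev/sdX
    sudo mdadm --add /dev/md0 /dev/sdX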
2 votes
1 answer
3k views

mdadm shows inactive raid0 array instead of degraded raid6 after disk failure

I've been running an Ubuntu 18.04 system with an 8-disk raid6 array that crashed while it had a faulty disk (I only noticed the faulty disk after the crash). The raid array has survived ...
asked by D.F.
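
The "raid0" line is an artifact of how an unassembled array gets reported, not a real level change; the member superblocks should still record raid6. A diagnostic sketch, with /dev/md0 and /dev/sd[a-h]1 as hypothetical names:

    # Confirm the superblocks still say raid6
    sudo mdadm --examine /dev/sd[a-h]1 | grep -E 'Raid Level|State|Events'
    # Stop the inactive array and force assembly from the good members
    sudo mdadm --stop /dev/md0
    sudo mdadm --assemble --force /dev/md0 /dev/sd[a-h]1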
1 vote
0 answers
696 views

Strange issue with Dell PowerEdge R610 hard drive status lights

I set up a test FreeNAS server the other day on a spare Dell PowerEdge R610 and noticed that the hard drive activity LEDs are both always lit no matter what the server is doing. I have 6x 1TB 7.2k SAS ...
asked by Richie086
0 votes
1 answer
107 views

Btrfs volume missing

I have an Ubuntu 64-bit server with 7x 4TB HDDs in a Btrfs RAID 6. A few days ago I executed the btrfs balance command but forgot that this takes quite a while. Because of an update, I restarted my ...
asked by hewu
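
A balance interrupted by a reboot tries to resume at the next mount, which can make the volume appear hung or missing. Btrfs has a mount option to skip that resume; a sketch with a hypothetical device and mount point:

    # Mount without resuming the interrupted balance
    sudo mount -o skip_balance /dev/sdb /mnt/pool
    # Then inspect and either resume or cancel it
    sudo btrfs balance status /mnt/pool
    sudo btrfs balance cancel /mnt/pool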
