
I have had persistent issues using externally mounted hard drive arrays under CentOS 7, whether attached via USB3 or eSATA, whenever the data format uses single-drive redundancy across a 5-drive array (RAID5 or raidz1). The same drives and external enclosure have never had an issue in a stripe or mirror configuration.

Clarification for @Attie: by this I mean I have had a continuously operating RAID0 stripe across 5 identical drives, in an identical enclosure, attached to an identical interface card in the server. This amounted to a 5TB storage pool that to date has had zero offline events and no corruption. The redundant array is 4TB of usable space with 1TB of parity. The mirror was two drives plus a hot spare, first as RAID1 and later as a ZFS mirror.

I can work the stripe array at full speed without ever seeing an issue. I can export the ZFS pool from the redundant array and run, say, badblocks on each drive in that array simultaneously and see no sign of trouble.
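
For reference, the simultaneous per-drive check looked roughly like the sketch below (device names are illustrative; it assumes the enclosure's five drives enumerate as /dev/sdb through /dev/sdf and that badblocks runs in its default, read-only mode):

# Read-only surface scan of each drive in parallel; per-drive output goes to a log file.
for disk in /dev/sd{b,c,d,e,f}; do
    badblocks -sv "${disk}" > "badblocks-${disk##*/}.log" 2>&1 &
done
wait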

The symptom appears to be all drives dropping out simultaneously, with the array controller (md or ZFS) treating it as a catastrophic drive failure it cannot recover from.

The common denominator is the data format, but the symptom does not seem to correspond to drive wear; it instead suggests interconnect dropouts between the drive box and the server.
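
When the dropout happens I would expect it to leave traces in the kernel log; a minimal way to look for link resets or disconnects (assuming a stock CentOS 7 install, with grep patterns that are only a rough filter):

# Recent kernel messages mentioning SATA/USB link resets or devices going offline.
dmesg -T | grep -Ei 'hard resetting link|link is slow|device offline|i/o error|disconnect' | tail -n 50
# Same idea via the journal, limited to the last hour.
journalctl -k --since "1 hour ago" | grep -Ei 'reset|offline|i/o error'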

I have seen some suggestion that drives have a 'RAID limit', but I don't understand how a drive would know it is in a RAID array, or what that would change about its limits. I have also seen forum remarks along the lines of 'Ah, I see you're using WD Reds', but I have not been able to locate them again to reference here.
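
If those 'RAID limit' remarks refer to the drive's error-recovery timeout (TLER, exposed as SCT ERC), that setting can at least be inspected from the host; a minimal sketch, assuming smartmontools is installed and the same illustrative device names as above:

# Show each drive's SCT Error Recovery Control read/write timeouts (reported in deciseconds).
for disk in /dev/sd{b,c,d,e,f}; do
    echo "== ${disk} =="
    smartctl -l scterc "${disk}"
done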

Is there something intense about the way RAID5 and raidz1 work hard drives, to the point that a single-cable interconnect would hard-reset or momentarily disconnect? Is there any workaround for this condition?

  • Drive specification: 1TB WD Red WD10EFRX SATA 3.5
  • eSATA drive box multiplexer chip: SiI 3726
  • eSATA server host chip: SiI 3132
  • USB3 drive box: ICY BOX IB-3810U3
    • Multiplexer chip: ASMedia ASM1074L
  • Server motherboard USB3 host: Gigabyte GA-B85-HD3 SKT 1150

Related questions: Externally attached ZFS pool hangs up, no sign of errors on drives

1 Answer


I've seen your other questions about this enclosure... this might not quite be "answer" standard, but it's too much for a comment.

Please can you clarify your comments:

  • Were you using a 5x disk stripe? (what was the stripe size?)
  • How many disks were involved in the mirror?
  • Have you tried a RAID10-type setup?

If you simply put 5x disks in the enclosure and try to access them concurrently, do you see similar issues?

For example, try the following (adjusting as appropriate):

for disk in /dev/sd{a,b,c,d,e}; do
    dd if=${disk} of=/dev/null bs=512 count=4G iflag=count_bytes &
done
wait

This will access all five disks at the same time, reading the first 4 GiB of each in 512-byte blocks.

How about writing to the individual disks?
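
A comparable concurrent write test is sketched below. It is destructive to anything on the disks, so it assumes the pool/array has already been exported and the drives hold nothing you need (again, device names are illustrative):

# DESTRUCTIVE: overwrites the first 4 GiB of each disk, bypassing the page cache.
for disk in /dev/sd{b,c,d,e,f}; do
    dd if=/dev/zero of=${disk} bs=1M count=4096 oflag=direct conv=fsync &
done
wait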

  • Have updated the information as per your request, any more requests for information are welcome.
    – J Collins
    Commented Dec 7, 2017 at 13:15
