
I have a WD My Cloud EX2 Ultra NAS, almost 6 years old, and recently got a red light on it, indicating a disk problem. The NAS web dashboard reported that one of the disks was bad, so I bought a replacement: a brand-new WD Red disk from a reputable stockist. The bad disk was a 6TB WD WD60EFRX and I replaced it with a 6TB WD WD60EFPX. The NAS is set up as a RAID 1 array of two 6TB disks.

I powered off the NAS, removed the old disk, inserted the new one, and powered it back on. The dashboard said: "RAID Health: Rebuilding. Volume_1 is rebuilding." The rebuild took almost exactly 14 hours. Immediately after it completed, the dashboard reported: "RAID Health: Degraded. One or more RAID volumes are degraded."

The NAS sent me an email, which said: "Volume 1 is degraded. Check the system's drive status by running Disk Test in Settings / Utilities." So I ran a "Quick Test" (the other option being a "Full Test"), and the result was:

Disk1 Passed - Quick disk test completed successfully.
Disk2 Passed - Quick disk test completed successfully.

At this point I decided to just reboot the NAS, after which the NAS automatically restarted the RAID rebuild process. I left it running again, and 14.5 hours later the outcome was the same - RAID Health: Degraded.

I'd like to understand exactly what the problem is, and what I should do.

  • How do the SMART stats look on the disks? I believe the Reds have 3-year warranties, so it's worth checking. I'd expect a WD NAS to properly read SMART data for WD disks, but I don't generally trust the "Status: OK" summaries of the health stats unless I can see the data to confirm them. The message does seem to indicate an issue with the single disk rather than the array.
    – FrankThomas
    Commented Feb 6 at 22:15
  • Hi there @FrankThomas. Any idea how to access/run a test that would provide me with SMART data for a My Cloud NAS device? I'm wary about doing anything that could have negative implications, e.g. I don't want to keep pulling the disk from the enclosure, prompting the NAS to do more and more RAID rebuilds.
    – osullic
    Commented Feb 6 at 23:59
  • Totally with you, though I know next to nothing about WD NASes. I do believe, however, that if you shut down the NAS, pull the disk, connect it to another system, and don't power the NAS back up until both disks are present once more, that shouldn't trigger a rebuild. I suggest Linux just because I know it won't try to do anything to the disk unless you tell it to. Is the array now stable (e.g. if you reboot the NAS, it doesn't start performing a rebuild on boot)?
    – FrankThomas
    Commented Feb 7 at 0:25
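Following up on the SMART discussion above: if one of the disks is connected to a Linux machine with smartmontools installed, the raw attribute table can be read with `sudo smartctl -a /dev/sdX` (read-only, so it won't touch the data). The device name and the sample attribute lines below are illustrative, not taken from these disks; this is just a minimal sketch of pulling out the three counters that most strongly indicate a failing drive:

```shell
# The real command would be:  sudo smartctl -a /dev/sdX
# Sample (illustrative) SMART attribute lines in smartctl's table format:
sample_output='  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       12
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       3'

# Field 2 is the attribute name; the last field is the raw value we care about.
echo "$sample_output" | awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {print $2, "raw =", $NF}'
```

A non-zero raw value for Current_Pending_Sector or Offline_Uncorrectable on either disk could explain a rebuild that completes but leaves the array degraded, since a rebuild has to read every sector of the surviving disk.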

1 Answer


Not much of an answer, but here's the explanation anyway, in case anyone else finds themselves in the same situation I was in...

Initially only one of the disks showed a red light and was reported as bad, so I replaced it. The RAID volume was still being reported as degraded, so I got in touch with WD Support, and after much back and forth, the only remaining option appeared to be recreating the RAID array and formatting the disks. But at that point, the NAS started to report that the second disk was also (suspiciously) bad. So the solution in my case was to replace the second disk as well. While I already had a backup in any case, I was still able to copy all the data off the NAS volume onto a portable HDD before replacing the second disk, and then I started from scratch, creating a RAID volume with the two brand-new disks.
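For anyone wanting to dig deeper before recreating the array: my understanding (an assumption, not something confirmed by WD) is that the EX2 Ultra runs Linux software RAID (md) under its firmware, so if SSH access is enabled in the dashboard, the array state can be read directly with `cat /proc/mdstat` and `mdadm --detail /dev/md1` (the `md1` device name is a guess). The sample content below is illustrative; the sketch just shows how to read the degraded marker:

```shell
# Real commands on the NAS over SSH would be:
#   cat /proc/mdstat
#   mdadm --detail /dev/md1        # device name is an assumption
# Sample /proc/mdstat content for a degraded two-disk RAID 1 (illustrative):
mdstat='md1 : active raid1 sda2[0]
      5855700992 blocks super 1.0 [2/1] [U_]'

# "[2/1]" means the array wants 2 members but only 1 is active;
# "[U_]" shows member 0 up (U) and member 1 missing (_). "[UU]" is healthy.
case "$mdstat" in
  *'[U_]'*|*'[_U]'*) echo "degraded" ;;
  *'[UU]'*)          echo "healthy"  ;;
esac
```

Seeing which slot stays `_` after a rebuild would have shown whether the array kept dropping the new disk or the original one.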

I'm going to investigate the SMART data on the "bad" disks, and if they look healthy enough, I might resell them - with full disclosure, of course.
