I have a home server with the following specifications:
- Lenovo TS140 Server
- IBM ServeRAID M5015 RAID Controller
- 4x WD Red 4TB Drives (WD40EFRX) in RAID 10
- 3WARE SFF-8087 Cable (CBL-SFF8087OCF-05M) connecting the drives to the RAID controller
Recently, one of the drives failed (after about 8 months of use), so I RMA'd it, received the replacement drive, and rebuilt the array successfully. Twenty-four hours later, the controller reported that the replacement drive had failed, so I submitted another RMA. I just received the second replacement and, as soon as I insert the drive or try to rebuild the array, the controller's alarm goes off and it marks this new drive as failed as well.
The odd thing is that if I take the drive out of the server and put it in my desktop computer, I'm able to format and use it without any issues. In CrystalDiskInfo, a program that reads S.M.A.R.T. data, the drive's health status shows as "Good".
Unfortunately, I don't have much experience with RAID, so I'm not entirely sure what the issue is here. Should I just send the drive back for another RMA? Could the SFF-8087 cable be failing? Or is there anything else I can do to get more insight into what might be causing this?
Here's a screenshot showing the drive's SMART data:
I also ran disk self-tests (short and extended) in PassMark DiskCheckup and Western Digital Data LifeGuard Diagnostics; all tests passed: