I've got a home server project I've been working on for a while now. It's a web and file-storage/backup server running Debian 8. I've been thinking about upgrading my storage options, as I'm running out of space at approx. 5TB.
What I'd like to do is set up a ZFS filesystem, starting out with 4 x 2TB HDDs (6TB storage and 2TB parity). I was looking into buying the Icy Dock MB455SPF-B and noticed that its LEDs support a red "HDD fail" signal. The fine print says:
*the hard drive fail signal is provided by external host such as a RAID controller card or a motherboard. Please make sure your controller card / motherboard can provide hard drive fail signal via voltage pin in order to use this function.
I'd like to use this feature to help avoid mistakes if I ever need to replace a drive, but I'm unsure about support for this "fail signal". I haven't seen it listed in the specs of any motherboards or SATA controllers I've looked at. Is it a common feature, or is it usually only something RAID cards have? If my mobo doesn't have it, can I buy a SATA controller card that does? (I don't want a full-on RAID card.) Also, I have to assume the ZFS software needs to support it as well? Is that a standardized sort of thing?
thanks for the knowledge!
Current Setup / Explanations:
- Mobo: Intel D925XBC w/ Intel P4 single core (yes, old)
- HDDs: 2 x 1TB, 1 x 3TB
- Looking to upgrade everything, as obviously it's pretty old, but I was thinking of just starting with the HDDs for now. I'll probably still get old, used parts, since the server doesn't do anything too intense.
- ZFS because it's a fairly large volume, so the expandability of RAID is nice. Most of the data is for long-term storage, so I don't want to worry about slow and silent data corruption happening while I'm not looking. And parity is nice too.
- Soft-RAID because... well, I think ZFS only comes as software RAID, right? Also because I don't need crazy performance, and I don't want to deal with having to find the exact same RAID controller if it dies.
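For what it's worth, a raidz1 layout like the one described (4 x 2TB, one disk's worth of parity) would be created along these lines. This is just a sketch: the pool name "tank" and the disk IDs are placeholders, and pointing at /dev/disk/by-id paths instead of /dev/sdX is the usual advice so the pool survives device reordering.

```shell
# Hypothetical pool name and disk IDs -- substitute your own.
# raidz1 = single parity: with 4 x 2TB that's ~6TB usable, 2TB parity.
zpool create tank raidz1 \
    /dev/disk/by-id/ata-DISK1 \
    /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 \
    /dev/disk/by-id/ata-DISK4

# A periodic scrub is what catches slow and silent corruption:
zpool scrub tank
zpool status tank    # shows scrub results and per-disk error counters
```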
EDIT:
So I'm starting to get the impression that this is pretty much just a server thing, and that enthusiasts don't bother. Which seems odd to me; I would have thought more blinky lights would be right up most people's alley.
After continued reading, it seems like the area of "failure LEDs" is somewhat varied and non-standard.
From what I can tell, the only surefire way is to buy an actual rack-mount server with a RAID card that's made to talk to the HDD backplane. That way everything is controlled through the RAID card, and it handles all the failure-LED stuff.
If you're building your own unit with software RAID, there are things like SGPIO that should work, but it seems to go by several names (SFF-8485, iPass) and might be part of mini-SAS? Nothing seems totally clear. Also, I think SES-2 can do it? How the Icy Dock enclosure I noted in my question does it is beyond me; there's basically NO documentation. There are utilities like ledmon that can use an SGPIO-enabled HBA to control the LEDs, but you're going to be driving them manually, or via a home-made script.
So as long as you can find an HBA that has SGPIO (mostly server units; not so hard to find, but expensive) and an HDD enclosure that also uses SGPIO (also mostly server stuff, and much harder to find, because usually this is just the front of the server rack), and you can write yourself a good, reliable script that checks the state of your HDDs and updates the LEDs accordingly, you should be all set.
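The home-made script part could be sketched something like this. It's just a sketch, assuming a pool named "tank" and an SGPIO-capable HBA driven by ledctl (from the ledmon package); the pool name and device names are placeholders. The parsing relies on the fact that device rows in `zpool status` output have five columns (NAME STATE READ WRITE CKSUM).

```shell
#!/bin/sh
# Print the name of every vdev whose state indicates a dead/missing disk.
# Five-column rows are the NAME/STATE/READ/WRITE/CKSUM device lines;
# the header row is skipped because "STATE" doesn't match the pattern.
failed_disks() {
    awk 'NF == 5 && $2 ~ /^(FAULTED|UNAVAIL|OFFLINE|REMOVED)$/ { print $1 }'
}

# Example wiring -- commented out because it needs real hardware:
# for dev in $(zpool status tank | failed_disks); do
#     ledctl failure=/dev/"$dev"    # light the red LED on that slot
# done
```

Run from cron every few minutes and it would keep the fail LEDs roughly in sync with what ZFS thinks of each disk.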