
I've got a home server project I've been working on for a while now. It's a web and file-storage/backup server running Debian 8. I've been thinking about upgrading my storage options, as I'm running out of space at approximately 5 TB.

What I would like to do is set up a ZFS filesystem, starting out with 4 x 2TB HDDs (6 TB of storage and 2 TB of parity). I was looking into buying the Icy Dock MB455SPF-B and noticed that the LEDs support a red "HDD fail" signal. In the fine print it says:

*the hard drive fail signal is provided by external host such as a RAID controller card or a motherboard. Please make sure your controller card / motherboard can provide hard drive fail signal via voltage pin in order to use this function.

I'd like to be able to use this feature to help eliminate any mistakes if I need to replace a drive. But I'm unsure about support for this "fail signal"; I didn't see it listed in the specs of any motherboards or SATA controllers I've looked at. Is it a common feature, or is it usually only something RAID cards have? If my motherboard doesn't have it, can I buy a SATA controller card that does? (I don't want a full-on RAID card.) Also, I have to assume that the ZFS software needs to support it as well? Is that a standardized sort of thing?

thanks for the knowledge!

Current Setup / Explanations:

  • Mobo: Intel D925XBC with an Intel Pentium 4 (single core; yes, old)
  • HDDs: 2 x 1TB, 1 x 3TB
  • Looking to upgrade everything, as obviously it's pretty old. I was thinking of just starting with the HDDs for now. I'll probably still get old, used parts, as the server doesn't do anything too intense.
  • ZFS because it's a fairly large volume, so the expandability of RAID is nice. Most of the data is for long-term storage, so I don't want to worry about any slow and silent data corruption happening while I'm not looking at it. And parity is nice too.
  • Soft-RAID because... well, I think ZFS only comes as software RAID, right? Also because it doesn't need crazy performance, and I don't want to deal with having to find the exact same RAID controller if it dies.

EDIT:

So I'm starting to get the impression that this is pretty much just a server thing, and that enthusiasts don't bother. Which seems odd to me; I would have thought more blinky lights would be right up most people's alley.

After continued reading, it seems like the area of "failure LEDs" is somewhat varied and non-standard.

From what I can tell, the only surefire way is to buy an actual rack-mount server with a RAID card that's made to talk to the HDD backplane. That way everything is controlled through the RAID card, and it handles all of the failure-LED logic.

If you're building your own unit with software RAID, there are things like SGPIO that should work, but it seems to go by several names (SFF-8485, iPass) and might be part of mini-SAS? Nothing seems totally clear. Also, I think SES-2 can do it? How the Icy Dock enclosure I noted in my question does it is beyond me; there's basically NO documentation. There are utilities like ledmon which can use an SGPIO-enabled HBA to control the LEDs, but you're going to be doing it manually, or by way of a homemade script.
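
For the manual route, assuming you do end up with an SGPIO-capable HBA and backplane, the ledmon package's ledctl tool takes a pattern name and a device list (for example locate, locate_off, failure). Here is a minimal, hypothetical Python sketch that just wraps ledctl to blink the locate LED of one bay before you pull a drive; the device path, and whether your particular hardware honours the pattern, are assumptions:

    #!/usr/bin/env python3
    # Sketch: blink the locate LED of one bay via ledctl (ledmon package).
    # Assumes root, an SGPIO-capable HBA/backplane, and that ledctl
    # recognises the given block device -- all of which depend on your hardware.
    import subprocess
    import sys

    def set_locate(device, on=True):
        """Turn the locate LED for `device` on or off."""
        pattern = "locate" if on else "locate_off"
        subprocess.run(["ledctl", f"{pattern}={device}"], check=True)

    if __name__ == "__main__":
        # Usage (hypothetical): python3 locate_led.py /dev/sdb on
        device, action = sys.argv[1], sys.argv[2]
        set_locate(device, on=(action == "on"))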

So as long as you can find an HBA that has SGPIO (mostly server units; not so hard to find, but expensive) and an HDD enclosure that also uses SGPIO (also mostly server stuff, and much harder to find, because usually this is just the front of a server chassis), and you can write yourself a good, reliable script that checks the state of your HDDs and updates the LEDs accordingly, you should be all set.
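
To make the "reliable script" part concrete, here is a rough sketch (not a tested implementation) of the kind of glue involved: parse zpool status for leaf vdevs that are no longer ONLINE and hand the corresponding block devices to ledctl's failure pattern. The pool name "tank", the use of plain /dev/sdX vdev names, and the exact zpool status layout are assumptions you would need to adapt:

    #!/usr/bin/env python3
    # Sketch: light the failure LED on any vdev that `zpool status` reports
    # as unhealthy, using ledctl from the ledmon package. Needs root and an
    # SGPIO-capable HBA/backplane. The pool name and the /dev/sdX vdev naming
    # are assumptions -- adapt to your setup (e.g. /dev/disk/by-id names).
    import re
    import subprocess

    POOL = "tank"                                    # hypothetical pool name
    BAD_STATES = {"DEGRADED", "FAULTED", "UNAVAIL", "OFFLINE", "REMOVED"}

    def leaf_vdev_states(pool):
        """Return {device_name: state} for the leaf vdevs of `pool`."""
        out = subprocess.run(["zpool", "status", pool],
                             capture_output=True, text=True, check=True).stdout
        states, in_config = {}, False
        for line in out.splitlines():
            stripped = line.strip()
            if stripped.startswith("NAME"):          # header of the config table
                in_config = True
                continue
            if stripped.startswith("errors:"):       # end of the config table
                break
            if not in_config or not stripped:
                continue
            fields = stripped.split()
            if len(fields) < 2:
                continue
            name, state = fields[0], fields[1]
            # Skip the pool row and raidz/mirror/etc. group rows; keep disks.
            if name == pool or re.match(r"^(raidz|mirror|spare|log|cache)", name):
                continue
            states[name] = state
        return states

    if __name__ == "__main__":
        failed = [f"/dev/{name}" for name, state in leaf_vdev_states(POOL).items()
                  if state in BAD_STATES]
        if failed:
            # One ledctl call for all failed drives; without --listed-only,
            # ledctl also resets the LEDs of drives that are not listed.
            subprocess.run(["ledctl", "failure=" + ",".join(failed)], check=False)

You would run something like this from cron, or trigger it from a ZED (ZFS Event Daemon) hook, rather than by hand.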

  • The low-tech approach is to label each disk with an identifier (visible before you unplug any cabling), and use that identifier in the pool definition. (See the sketch after these comments.)
    – user
    Commented Aug 17, 2016 at 14:14
  • I think part of the reason why this question hasn't got a lot of attention is that you have a lot of superfluous content in it that is unrelated to the actual question you are asking. I recommend pruning heavily, as well as highlighting the specific question, to make it easier to spot. Also, one question per post, no more. If you are having trouble coming up with a good summary of your question for use in the title, how are we going to be able to tell what you need to know?
    – user
    Commented Aug 17, 2016 at 14:14
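
To make the labeling approach from the first comment concrete, here is a hedged Python sketch that prints each disk's kernel name next to its stable /dev/disk/by-id link, which usually embeds the model and serial printed on the drive's own label. If the pool is built from those by-id paths, a FAULTED entry in zpool status identifies a physical disk without any LEDs at all. The filtering of WWN and partition links is just a simplifying assumption:

    #!/usr/bin/env python3
    # Sketch: map kernel device names to their stable /dev/disk/by-id links,
    # so the names used in the pool match the serial printed on each drive.
    import os

    BY_ID = "/dev/disk/by-id"

    def by_id_map():
        """Return {'/dev/sda': ['ata-<model>_<serial>', ...], ...}."""
        mapping = {}
        for link in sorted(os.listdir(BY_ID)):
            # Skip WWN aliases and partition links to keep the output short
            # (a simplifying assumption; adjust as you see fit).
            if link.startswith("wwn-") or "-part" in link:
                continue
            target = os.path.realpath(os.path.join(BY_ID, link))
            mapping.setdefault(target, []).append(link)
        return mapping

    if __name__ == "__main__":
        for device, links in sorted(by_id_map().items()):
            print(device, "->", ", ".join(links))
        # Then create the pool from the full by-id paths, e.g. (hypothetical):
        #   zpool create tank raidz /dev/disk/by-id/ata-MODEL_SERIAL1 \
        #       /dev/disk/by-id/ata-MODEL_SERIAL2 ...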

1 Answer


Your assumptions are correct; this feature is not really sought after in the consumer market. Most people have two to six disks at most, so labeling them as Michael suggested is pretty simple, cheap, and easy. After all, you will probably not get two dead disks every hour, but maybe one dead disk in three years. Also, LEDs might annoy you if your server is in your living room or bedroom.

On the other hand, imagine you have 50 racks, each with 10 systems, and each system has 24 disks: 12,000 disks in total. You might have to change several disks each day, so it becomes much more important to be able to go to the rack and quickly identify the correct disk. Reading 240 serial numbers would quickly drive you insane, while removing the wrong disk by accident would rain hell down on you. So what you do is use the lights to quickly locate the candidate disk, and then read and verify the serial number on the label, comparing it with your error report (because every piece of software can have bugs).

Also, the definition of a "faulty drive" is not the same for all people and situations. An empty bay (no connection to a drive) could be a serious connection fault, or just routine maintenance of your raidz3 array. A functioning disk returning some read errors could be a sign of a critical condition and the need for immediate replacement, or nothing to worry about if it stays below a certain threshold over a set amount of time.
