I have a machine with a ZFS pool on three magnetic drives (/dev/sda, /dev/sdb, /dev/sdd) holding the bulk of the data, and an SSD with ext4 on /dev/sdc holding the rest of the system (including home directories, logs, ...). I set up power saving for the drives with hdparm -S 240, which should spin them down after 20 minutes of inactivity, but after about 8 hours of inactivity only some of the drives are actually in standby:
# hdparm -C /dev/sd[abd]

/dev/sda:
 drive state is:  active/idle

/dev/sdb:
 drive state is:  standby

/dev/sdd:
 drive state is:  standby
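For reference, I applied the timeout roughly like this (a sketch; I used the same setting on each pool member):

# 240 * 5 s = 1200 s = 20 minutes until standby
for d in /dev/sda /dev/sdb /dev/sdd; do
    hdparm -S 240 "$d"
done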
Does anyone have an idea why this can happen? I assumed that accesses to a ZFS pool would be spread more or less uniformly over the drives in the pool. How can I diagnose the cause of this problem?
Edit: I tried enabling block dump (echo 1 > /proc/sys/vm/block_dump) for an hour and then looked through the log. There was no access to any drive in the pool, but sda is still not in standby.
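This is roughly what I did for the block dump test (a sketch; the kernel log location varies by distribution, so dmesg is another place to look):

# log every block-level access to the kernel log for one hour
echo 1 > /proc/sys/vm/block_dump
sleep 3600
echo 0 > /proc/sys/vm/block_dump
# then search the kernel log for accesses to the pool drives
grep -E 'sd[abd]' /var/log/kern.log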