I had a degraded disk on a ZFS volume in my FreeNAS server [build 9.10.2-U1 (86c7ef5)] and before trying to replace it, I rebooted the server.
What does the following mean and do I have an issue with that disk?
- At startup, I get the following even though all disks are back online in volume status:
- During the scrub operation, a new alert showed the disk in a degraded state with a checksum error count of 670 (I'm unsure what that means):
- Scrub results:
The scrub operation is now finished. Here are the final results:

```
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 66.7M in 16h55m with 0 errors on Sat Jan  2 13:32:13 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            gptid/e0ef3f08-70b6-11e6-b8eb-1c98ec0f2cd4  ONLINE       0     0     0
            gptid/e1b21671-70b6-11e6-b8eb-1c98ec0f2cd4  DEGRADED     0     0 1.29K  too many errors
            gptid/e2841c02-70b6-11e6-b8eb-1c98ec0f2cd4  ONLINE       0     0     0
            gptid/e3717f0c-70b6-11e6-b8eb-1c98ec0f2cd4  ONLINE       0     0     0

errors: No known data errors
```
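The "action" line above names two possible recovery paths. A minimal sketch of both, using the pool name and gptid from the output above (the zpool commands themselves are left commented out, since they act on a live pool; NEW_DISK is a placeholder for the replacement drive's identifier):

```shell
# Pool and degraded member, taken from the zpool status output above.
POOL=storage
BAD=gptid/e1b21671-70b6-11e6-b8eb-1c98ec0f2cd4

# Path 1: if the errors may have been transient (cabling, power event),
# clear the error counters and rescrub to see whether they come back:
#   zpool clear "$POOL" "$BAD"
#   zpool scrub "$POOL"

# Path 2: replace the disk; NEW_DISK is the new drive's gptid or device node:
#   zpool replace "$POOL" "$BAD" NEW_DISK

echo "pool=$POOL degraded=$BAD"
```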
Output of smartctl -a:

```
SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure      90%       39365        172825824
# 2  Extended offline    Completed: read failure      90%       39365        172825825
# 3  Short offline       Completed without error      00%       39364        -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
```
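Both extended self-tests failed at adjacent LBAs (172825824 and 172825825), which may point at one localized bad spot on the platter. A back-of-the-envelope way to locate it, assuming 512-byte logical sectors (check the "Sector Size" line in your own smartctl output):

```shell
# LBA of the first read failure, from the self-test log above.
LBA=172825824

# Byte offset into the disk, assuming 512-byte logical sectors.
BYTES=$((LBA * 512))
echo "offset: $BYTES bytes (~$((BYTES / 1024 / 1024 / 1024)) GiB)"
```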
Run smartctl -a against the device. I don't think this is exposed anywhere in the FreeNAS GUI. It will show the drive's own health monitoring (SMART) statistics.
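A sketch of the relevant smartctl invocations. The /dev/ada1 device node is an assumption (match the gptid to a device via the FreeNAS disk list); the commands are commented out because they touch real hardware:

```shell
# Assumed device node; on FreeNAS, disks are typically /dev/adaN or /dev/daN.
DEV=/dev/ada1

# Full report: health summary, attributes, error log, self-test log:
#   smartctl -a "$DEV"

# Re-run a long self-test, then check the result once it finishes:
#   smartctl -t long "$DEV"
#   smartctl -l selftest "$DEV"

echo "device=$DEV"
```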