sawdust

Why exactly is a "Raw Read Error Rate" of 1 considered bad? Isn't it the lower the read error rate, the better the reads (and the less the errors)?

Your research has found that this Raw Read Error Rate is derived from the "total number of correctable and uncorrectable ECC error events". The number is normalized and treated as a percentage, so the current value represents 1%, i.e. 1% of read operations have had an issue.
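For reference, smartmontools' smartctl can display both the normalized and raw attribute values; this is a generic invocation (the device node /dev/sdX is a placeholder for your actual drive):

```shell
# Print the SMART attribute table (normalized value, worst, threshold, raw value).
sudo smartctl -A /dev/sdX

# Print an extended report, including the device error log and self-test log.
sudo smartctl -x /dev/sdX
```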

Modern NAND chips explicitly mandate ECC capability because occasional bit errors on read can occur during normal operation. The requirement specifies a permissible number of bits per NAND page read that may be in error and need correction.

In other words, a read operation may occasionally incur correctable errors; by itself this is not an indicator of pending failure.
The occurrence of uncorrectable read errors could be problematic. In theory a sector/page/block that (consistently) generates uncorrectable read errors should be identified by the integrated drive controller, marked as a bad block, and retired from use. (Note that the SMART report indicates that 9 blocks have been retired so far during this drive's lifetime.)

The Raw Read Error Rate is not as significant as the number of uncorrectable read errors (which is now available in the SMART report that you appended).
The number of uncorrectable read errors seems to be indicated in Reported Uncorrectable Errors as 0x1B3 or 435.
Compared to the total read errors of 0x1C9 or 457, that would indicate that there were only 22 (benign) correctable read errors (assuming no wrap-around), but 95% of that total are the concerning uncorrectable read errors.
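The hex arithmetic above can be checked in any POSIX shell (pure arithmetic, no assumptions about the drive):

```shell
# Convert the SMART raw values from hex and compare them.
total=$((0x1C9))            # 457 total ECC error events
uncorrectable=$((0x1B3))    # 435 reported uncorrectable errors
correctable=$((total - uncorrectable))
percent=$((100 * uncorrectable / total))
echo "correctable=${correctable}, uncorrectable=${uncorrectable} (${percent}% of total)"
# Prints: correctable=22, uncorrectable=435 (95% of total)
```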

Does this mean my SSD is about to fail imminently? It has been working fine since I bought my laptop years ago...

If you think that the drive is "working fine", that could mean the drive was able to recover from those errors, either by successful retries or by remapping the affected blocks. (Note that the SMART report indicates that 9 blocks have been retired so far during this drive's lifetime.)
At the very least, back up your data from that drive and regularly monitor the SMART report for changes.

With almost 20,000 hours of use, there's no way to determine when these errors occurred.
But you could try to generate fresh read errors by scanning the entire drive, either with the SMART long/extended self-test or with a Linux command such as sudo dd if=/dev/sdX of=/dev/null. The first test is a lot faster but would only increment the SMART statistics, whereas the latter can also abort on a read error and thus provide the LBA of a problem area.
If you do not encounter more read errors, then that could be reassuring.
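As a sketch, the two scan methods described above look like this (smartmontools and GNU dd assumed; /dev/sdX is a placeholder for your drive):

```shell
# Method 1: drive-internal extended self-test; results land in the SMART self-test log.
sudo smartctl -t long /dev/sdX      # start the test (runs in the background)
sudo smartctl -l selftest /dev/sdX  # check progress/results later

# Method 2: host-side read of every sector; dd stops at the first read error,
# and the block count it reports localizes the problem area.
sudo dd if=/dev/sdX of=/dev/null bs=1M status=progress
```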

Note that the SMART report's current value of 98 for Percent Lifetime Used indicates that only 2% of the expected lifetime has been used. The raw value of 2 indicates that neither of the two salient end-of-life indicators (average block wear and available spare blocks) is problematic.
