
Is e2fsck -cc equivalent to badblocks -nsv -o badblocks.txt; e2fsck -L badblocks.txt?

If so, could I potentially speed up the testing by getting badblocks to do a destructive read-write test, like badblocks -wsv -o badblocks.txt; e2fsck -l badblocks.txt? Could I utilise badblocks' last-block and first-block parameters to do this check in sections, using e2fsck -l badblocks.txt to append the blocks instead of overwriting them? What are the pros/cons and feasibility of each approach? A sketch of the sectioned approach I have in mind is below.
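A minimal sketch of the sectioned idea, assuming a hypothetical /dev/sdb1 and a 4096-byte filesystem block size (badblocks must be given the filesystem's block size with -b, or the block numbers it reports mean nothing to e2fsck; dumpe2fs -h will show the real value). The block ranges are purely illustrative:

    umount /dev/sdb1
    # badblocks takes last-block, then first-block, as positional arguments;
    # -n is the non-destructive read-write test (-w would destroy the data)
    badblocks -b 4096 -nsv -o badblocks-part1.txt /dev/sdb1 999999 0
    badblocks -b 4096 -nsv -o badblocks-part2.txt /dev/sdb1 1999999 1000000
    # -l appends to the filesystem's existing bad-block list; -L would replace it
    e2fsck -l badblocks-part1.txt /dev/sdb1
    e2fsck -l badblocks-part2.txt /dev/sdb1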

1 Answer


First of all, you really don't want to do a destructive test. Yes, it will be faster, but as the name implies, it destroys the data on the file system, which is generally considered undesirable by those who want to, you know, keep their data.

Also, for modern storage devices, using badblocks is really not necessary. Back in the dark ages, when ATA and IDE disks roamed the earth, bad block handling had to be done manually. But the last IDE disks were manufactured in 2013, a full decade ago, which is a long time in the computer industry, where two years == infinity. Modern storage devices, both HDDs and SSDs, have a spare pool of replacement sectors, and when a device detects that part of its media has gone bad, it will automatically redirect reads and writes from the bad sector to one of the spare sectors. You can query the disk using SMART to see whether the spare sector pool has been depleted; and if the SMART health statistics show the disk is ready for hospice care, the wise system administrator will obtain a replacement disk and swap it in before losing all of the data on said disk.
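For example, with the smartmontools package installed, something like the following will show the relevant counters (the device name is a placeholder, and exact attribute names vary by vendor):

    smartctl -H /dev/sda   # overall health verdict
    smartctl -A /dev/sda   # attribute table; the interesting ones are typically:
                           #   ID 5   Reallocated_Sector_Ct  (sectors already remapped)
                           #   ID 197 Current_Pending_Sector (sectors waiting to be remapped)
                           #   ID 198 Offline_Uncorrectable  (sectors that could not be read)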

So in 2024, my best advice is to not bother fooling with e2fsck -cc or the badblocks command. It's just a waste of time, and you will quite likely make things worse by trying to manage the bad blocks list yourself. 15 or 20 years ago, it made sense. No longer.

