To verify the integrity of the data on disks I use for archiving (intended to last 30 years or more), and to refresh their magnetic signal, I want to read and re-write every block on each drive every year or two. Some are HFS+ and some are NTFS. This answer suggests a utility that will do that when run from a Windows machine, but I don't have a Windows machine handy, and even if I did, I don't think the Windows utility will work with HFS+ disks.
I want to make sure that I am refreshing important "hidden" data like the partition map itself, so I'm looking for a procedure that I can run on a Mac that will simply treat the disk like raw block storage and just read and re-write each block on the disk, but at the same time provide enough information to call out which files are damaged if it encounters a read or write error. (Since I have 2 archive copies of everything, I hope I can recover a bad file on one archive with a good file from the other archive.)
I can think of a bunch of ways to read all the data on the disk if I can get the Mac to expose it as a raw block device, but no satisfactory way to write the data back to the same block, or to identify which file a bad block belongs to.
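For concreteness, here is the kind of read-and-rewrite loop I have in mind, sketched against an ordinary file standing in for the raw device. The `/dev/rdiskN` device name is my assumption about how a Mac exposes the raw disk (it would need to be unmounted first, and the loop run as root); the chunk size and temp-file path are arbitrary choices, not a tested procedure:

```shell
# Stand-in "disk": a 4 MiB file of zeros. On a real Mac drive this would
# be the raw device node (something like /dev/rdiskN -- an assumption;
# check with `diskutil list` and unmount the disk first).
DEV=./fake-disk.img
BS=1048576                                    # 1 MiB chunks
dd if=/dev/zero of="$DEV" bs=$BS count=4 2>/dev/null

SIZE=$(wc -c < "$DEV")
BLOCKS=$(( (SIZE + BS - 1) / BS ))
for (( i = 0; i < BLOCKS; i++ )); do
    # Read one chunk; a failure here pins down the bad byte offset.
    if ! dd if="$DEV" of=/tmp/chunk.bin bs=$BS skip=$i count=1 2>/dev/null; then
        echo "read error at block $i (byte offset $(( i * BS )))"
        continue
    fi
    # Write the same bytes straight back to the same offset
    # (conv=notrunc leaves the rest of the device untouched).
    dd if=/tmp/chunk.bin of="$DEV" bs=$BS seek=$i count=1 conv=notrunc 2>/dev/null
done
echo "refreshed $BLOCKS blocks"
```

This covers the "treat it as raw storage" half of what I want, including the partition map, but it gives me only a byte offset on error, not a filename.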
A solution that re-writes the data would still be helpful even if, when it finds a bad block, it cannot flag which file is corrupted. If you know of a solution that works only on Linux or Windows, I'd like to hear about it as long as it can handle both HFS+ and NTFS drives. Also, if you know of a utility that can determine which file a bad block is part of, given a raw block ID, that would be useful too, as half of a two-part solution.