12

My wife's Windows 7 (64-bit) box has suddenly developed a SMART "disk is bad" status. I'm attempting to copy everything off (no admonishments about lacking a backup regimen, please, I know already :( ) by creating a System Image across the network to a different machine, but it gets to a certain point and starts taking forever. Running chkdsk reveals that certain files cause this: they have many bad blocks (tens of thousands in a row, if the event log is any indication), which makes the system do its standard try-to-recover-and-relocate-upon-access thing.

But this is taking so long, I'm afraid the disk will fail completely before I can get the damned thing copied. However, several of the files so far have been ones that she has copies of elsewhere, so I am able to just delete them prior to retrying the backup to speed things up considerably.

So: is there some tool or procedure that will try reading each file, and upon hitting a bad block, just tell me about it and skip to the next file? So I can see which ones I can just dump and which I need to let it try to recover?

1
  • 3
    "is there some tool or procedure that will try reading each file, and upon hitting a bad block, just tell me about it and skip to the next file?" This is exactly what ddrescue does gnu.org/software/ddrescue you can run it from a linux live USB stick like system rescue cd. it will skip bad sectors and read everything it can first, then go back and retry the bad sectors repeatedly
    – endolith
    Commented Dec 14, 2015 at 16:44

7 Answers

8

As no one ever actually answered your question, the following not-exactly-lightning-fast method may be the quickest way to get what you are looking for.

  1. The utilities you will need work under Linux, so you first need to create a USB key or CD that you can use to boot into Linux (or to attach the disk to a Linux machine).

  2. You then need to run ddrescue from the GNU ddrescue package. This will create a "mapfile", which is basically a list of the bad sectors on your disk. ddrescue has many options, which among other things vary in how hard it works to read/recover data from a bad sector. If you want to treat any sector that gives trouble as "bad", and don't want to actually recover anything with ddrescue, you can use the -n option and specify /dev/null as the target. This will be pretty fast: ddrescue will just read once through all the sectors of the disk in order, and the mapfile output will contain a list of sectors where the read failed.

  3. You then need to run a utility called ddru_ntfsfindbad on the mapfile and the disk, and this will output what you want: a list of the files on the disk that have parts in one of the bad sectors.
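The mapfile produced in step 2 is plain text, so you can also inspect the bad ranges yourself before (or instead of) running ddru_ntfsfindbad. Here is a minimal Python sketch; the layout shown in the sample is my understanding of the current GNU ddrescue mapfile format (comment lines, one status line, then "position size status" lines, where status '-' marks a bad area), so check your version's manual before relying on it:

```python
# Minimal sketch: list the bad areas recorded in a GNU ddrescue mapfile.
# Assumed format: lines starting with '#' are comments, the first
# non-comment line is the status line, and each later data line is
# "<pos> <size> <status>", with '-' marking an area of bad sectors.

def bad_extents(mapfile_text):
    """Return (start_byte, size_bytes) tuples for every bad area."""
    extents = []
    status_line_seen = False
    for line in mapfile_text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        if not status_line_seen:   # first non-comment line is the status line
            status_line_seen = True
            continue
        pos, size, status = line.split()[:3]
        if status == '-':
            extents.append((int(pos, 0), int(size, 0)))
    return extents

sample = """# Rescue Logfile. Created by GNU ddrescue
# current_pos  current_status
0x00120000     +
#      pos        size  status
0x00000000  0x00100000  +
0x00100000  0x00020000  -
0x00120000  0x7FEE0000  +
"""
for start, size in bad_extents(sample):
    print(f"bad area: byte {start}, length {size} (sector {start // 512})")
```

Dividing the byte offset by 512 gives the (assumed) sector number, which you can then feed to the file-lookup tools mentioned in the other answers.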

NOTE however:

  1. If a drive is failing, reading it at all is very likely to make it fail worse. So it is quite possible (some would say "close to certain") that some/many/lots of sectors that were good before you read the disk twice via this procedure are now bad. The smart thing to do would be to have a good drive on hand and do both steps above while actually recovering data. If you do this, of course, you might want to use ddrescue's ability to try extra hard to get the data off hard-to-read sectors.

  2. ddru_ntfsfindbad's manual says that you CANNOT run it on the original bad drive UNLESS the file system is/was NTFS. So you're ok in your case, but it will almost certainly be faster if you run it on a ddrescue-recovered drive and not the original. And if the bad sectors are in certain filesystem metadata, you really will need to do this.

I realize that the original question is very old, but I had this problem recently and thought that others might want to know the answer to the original question.

2
  • Can this be run in the new WSL on Windows?
    – mFeinstein
    Commented Sep 13, 2019 at 3:33
  • lol, for operations like this, you want the safest approach possible in terms of software stability and potential bugs.
    – Klaidonis
    Commented Mar 25, 2020 at 12:22
3

When it comes to bad sectors on a disk, if there is no backup then what I do is get a backup image of it using a tool called Drive Snapshot:

  Drive Snapshot
  http://www.drivesnapshot.de/

When this tool encounters bad sectors, it keeps track of them in a separate text file (one bad sector per line, so you can simply count the lines in the file to get the total number of bad sectors), which is also used as a cross-reference to find out which files used those sectors.
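Since that file contains one sector number per line, loading and tallying it is easy in any scripting language. A Python sketch (the sample data here is made up for illustration; the real file's exact contents may differ):

```python
# Minimal sketch: load a "one bad sector number per line" text file,
# as the answer above describes Drive Snapshot producing, and count
# the entries. The sample data is hypothetical.

def load_bad_sectors(text):
    """Return the listed sector numbers as a sorted list of ints."""
    return sorted(int(line) for line in text.splitlines() if line.strip())

sample = "81920\n2048\n2049\n"
sectors = load_bad_sectors(sample)
print(f"{len(sectors)} bad sectors: {sectors}")
# -> 3 bad sectors: [2048, 2049, 81920]
```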

2
  • +1. After this is done, run SpinRite as suggested by happy_soil; it may repair bad sectors, or recover data and move it to good sectors. When SpinRite is done, make another image using Drive Snapshot.
    – Moab
    Commented Feb 18, 2011 at 16:31
  • 1
    This doesn't answer the question: How to get a list of files with bad sectors? A list of bad sectors is not the same as a list of file names.
    – Chloe
    Commented Nov 6, 2017 at 23:40
1

I had the same question and did some research: http://www.disktuna.com/finding-out-which-file-is-affected-by-a-bad-sector/.

I am assuming Windows OS and NTFS file system.

So, a bad sector can be part of:

  • Unallocated space. We can ignore this.

  • File system structures. Normally chkdsk should take care of this. Depending on where the file system damage is, it is possible that chkdsk won't run at all, in which case you'd run a surface scan on the hard disk itself.

  • System Files affected: You could use the Windows System File Checker (SFC.exe). At the command prompt, type the following command, and then press ENTER: sfc /scannow.

  • User data: The Microsoft support tool NFI.exe can be used to convert an LBA sector address to a file path. This way you can determine which files need to be restored from backup after sector reallocation.

    Example:

    C:\Users\admin\Downloads>nfi \Device\Harddisk0\DR0 28521816
    NTFS File Sector Information Utility.
    Copyright (C) Microsoft Corporation 1999. All rights reserved.
    
    
    ***Physical sector 28521816 (0x1b33558) is in file number 5766 on drive C.
    \IMAGES\win7HDD.vhd
    
  • The easiest way is probably HD Sentinel. After running a surface scan HD Sentinel will display a list of files affected by bad sectors.

4
  • Please quote the essential parts of the answer from the reference link(s), as the answer can become invalid if the linked page(s) change.
    – DavidPostill
    Commented Mar 9, 2017 at 13:56
  • Ok, will do. Stand by ;) Commented Mar 10, 2017 at 15:21
  • Huge drawback of HD Sentinel for that particular feature: it will actually try to access the defective sectors and display their contents when requested to display which files they belong to. Piriform's Defraggler is excellent for that purpose, in combination with the LBA values of bad sectors provided by HD Sentinel during its scan: point a block on the map and it lists which files are in that area. But the first step should be cloning/imaging, and as Scott Petrack mentioned, it's possible to get a list of files affected by bad sectors by using ddru_ntfsfindbad in combination with ddrescue.
    – GabrielB
    Commented May 23, 2019 at 20:55
  • nfi.exe does not have this issue, it gives its results by analysing the MFT and does not attempt to access the requested sectors, but there seems to be an issue with values beyond 2^31 or 2147483648. superuser.com/questions/1267334/… The native multi-purpose Windows tool fsutil doesn't seem to have that issue, and the output is more streamlined, but it's slower (can be a problem if there are many clusters to request). With nfi use sector numbers, but with fsutil use cluster number.
    – GabrielB
    Commented May 23, 2019 at 21:50
1

If you already have a list of bad sectors, the most convenient tool I found to determine which are the potentially affected files is Piriform's Defraggler. When clicking on a given block on the volume's map, it will display a list of the files contained in that same block (even non-fragmented files). And when clicking on a file name in the “File list” tab (which only displays fragmented files), it will highlight all the blocks containing at least one sector belonging to that file. Unfortunately there is no numerical indication of offset / sector / cluster intervals, and no way to directly type a particular offset / sector / cluster value. (I wrote the company about two years ago to request a few enhancements which would make this great feature more practically usable in that kind of situations, and had a kind reply, thanking me for my comments and suggestions; I haven't updated Defraggler in a while, perhaps some of my suggestions have been implemented since then.)

I provided some more methods here: How do I find if there are files at a specific bad sector?

– nfi.exe X: [sector number]

– fsutil volume querycluster X: [cluster number]

With both of those command line tools it should be relatively easy to write a script so as to load each line of a list of sectors as input and get a list of files as output.

– HD Sentinel, but with a major caveat: it will actually try to access each requested sector and display its contents, which may temporarily freeze the system and worsen the drive's condition.

– R-Studio (same issue as HD Sentinel)

– WinHex (same issue as HD Sentinel)
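The scripting idea mentioned above for the two command-line tools can be sketched as follows. Treat it as an illustration only: the device path and output format are copied from the nfi example elsewhere in this thread, the geometry numbers are assumptions, and you should adjust both for your system.

```python
# Sketch: map a list of bad sector numbers to file paths via nfi.exe.
# Assumes nfi prints each affected file path on a line beginning with a
# backslash, as in the sample output shown in another answer here.
import subprocess

def nfi_output(device, sector):
    """Run nfi.exe for one sector and return its stdout (Windows only)."""
    proc = subprocess.run(["nfi", device, str(sector)],
                          capture_output=True, text=True)
    return proc.stdout

def files_for_sectors(sectors, device=r"\Device\Harddisk0\DR0", run=nfi_output):
    """Return {sector: [file paths]} for each sector in the list."""
    result = {}
    for sector in sectors:
        lines = run(device, sector).splitlines()
        result[sector] = [ln.strip() for ln in lines
                          if ln.strip().startswith("\\")]
    return result

def sector_to_cluster(lba, partition_start_lba,
                      bytes_per_sector=512, cluster_size=4096):
    """Convert an absolute LBA to a cluster number for fsutil, which takes
    clusters rather than sectors. The sizes here are common defaults, not
    facts about your disk; check them with 'fsutil fsinfo ntfsinfo X:'."""
    return (lba - partition_start_lba) * bytes_per_sector // cluster_size
```

For fsutil, remember the caveat from the comments above: it wants cluster numbers rather than sector numbers, hence the conversion helper.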

0

This depends on how badly you want the data, weighed against your desire to avoid spending money.

I recently reviewed CBL's new data recovery software and, although this drive is technically still running, one of the features I found worth mentioning was the ability to select the number of retry attempts for bad sectors.

In a case like this you can set it to 3 retries instead of the default (20 or 30, I think). By adjusting this down to 3 you should still catch all the data on weak portions of the drive without wasting crucial time on files that may already be beyond software recovery. Once you've captured that round, go back and select only the files that failed in the first attempt and retry a few times, gradually increasing the number of retries to 10, 20, 50 until you get everything or the drive flatlines completely.

Alternatively, after the first pass you could try SpinRite, as suggested by happy_soil, to see if it can refresh the bad sectors. But get the bulk of the data off quickly first, as this level of failure is often caused by failing heads, pre-amps, or cache in the drive circuitry. If this is the case and the faults aren't in the media, every second of run time counts.

CBL's software is a bit pricier than similar competitors at about $100, but it's the only commercial one I've seen with that much granularity in the controls, and decent support is available if you need help getting the settings worked out.

0

Run chkdsk /r in an elevated cmd prompt to locate bad clusters and recover readable information. This may improve the reliability of your backup attempt. Back up your files using a simple copy afterwards. If this fails, you can try data recovery methods that retry the read; if chkdsk could not read a bad sector, you can repeat chkdsk /r and try again. Multiple rounds of chkdsk /r and attempts to copy data are a reasonable way to repeat attempts at bad-sector data recovery. If chkdsk manages to read a bad sector even once, it will write the data to a good block, so repeated runs of chkdsk /r will continue to improve file integrity as long as each bad block can be read just once. If the data is gone, give up!

Once you have recovered the data, or given up and written it off, you could restore the drive to normal use again, but keep important data backed up somewhere else. It may be a good idea to copy as much of your data off the drive as possible, then do a low-level format followed by repartitioning and a slow/full format to allow bad-sector reallocation in both the manufacturer's remapping system and the NTFS bad-sector list. A quick format won't mark bad sectors as bad.

chkdsk /b clears the bad cluster list, then rescans and rebuilds it. If the rate of new bad clusters stays stable and the NTFS bad cluster list keeps it under control, the drive may be safe to use again. Remember, though, that if all of the hard drive's factory-allocated spare sectors have already been remapped, the drive can't remap any more and may be heading for imminent failure. NTFS monitors bad clusters independently, however, so even that may not be the final end for the drive.

You may want to keep an eye on future bad-sector growth by periodically running chkdsk /b and watching for a dangerous rate of increase using sector-scan software. If the drive shows signs that it's stable, it could continue working normally for a long time.

If the drive continues to give you problems, drop it from a very high location, this will prevent you from wasting any more of your precious time on an obtuse hard drive.

2
  • This doesn't explain how to get the files with bad sectors.
    – Chloe
    Commented Nov 6, 2017 at 23:43
  • CHKDSK should NEVER be used on a HDD suspected of having physical issues. It only fixes logical issues as far as the NTFS filesystem is concerned. It might further increase the amount of damage without recovering a single byte in the process. See the reply by Scott Petrack for the best course of action in that kind of situation. As to regularly assessing the condition of storage devices on Windows systems, I highly recommend HD Sentinel, commercial software but worth every penny considering that it can warn of an impending disaster at the first signs of trouble, and thus prevent it.
    – GabrielB
    Commented May 23, 2019 at 20:37
-2

While SpinRite won't exactly do what you want, it will try to fix and recover data that are situated on bad sectors.

As per usual, your mileage may vary, but based on various user testimonials, it works as advertised. I use it personally to maintain my disks.

Check its documentation for further details.

1
  • 2
    DO NOT USE SPINRITE ON A FAILING HDD. Its ability to "refresh", let alone "fix", is dubious at best, and while it's running, which is highly stressful for an already defective drive, not a single byte of data is actually recovered. It may be used at the very end, once a full clone has been made with ddrescue / HDDSuperClone, to attempt to salvage some of the sectors that were skipped (but don't count on it, most likely it will just force them to be reallocated, hence the original data will be lost anyway). See Scott Petrack's reply for the best course of action in such a situation.
    – GabrielB
    Commented May 23, 2019 at 21:13
