I started ddrescue to recover a failing 2 TB WD drive from my NAS. Although I specified a logfile, I was unfortunately running Ubuntu in trial mode from a bootable flash drive and did not write the logfile to a separately mounted drive, so when the power went out at around 11%, I lost the logfile.

Chapter 14 of the manual describes generate mode, which looked like it might work, so I ran:

ddrescue --generate-mode infile outfile mapfile
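
With my actual devices and mapfile path (the same ones used in the run below), that amounted to:

ddrescue --generate-mode /dev/sda /dev/sdb /tmp/ddrescue.log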

The drive I am recovering to is brand new, so I was hopeful this would work, since there was no old data present on it.

I'm now running Ubuntu installed on an SSD, so I used the generated mapfile to run ddrescue again with:

ddrescue -f -n -r1 /dev/sda /dev/sdb /tmp/ddrescue.log

This appears to have mostly worked: the recovery restarted around the 11.5% mark, right where it left off. My concern is that before the crash, ddrescue had identified one read error and showed about 20 MB of non-trimmed blocks, but on starting with the generated mapfile, it showed 0 read errors and 0 non-trimmed blocks. After running for 8 hours it now shows 2 errors and 29696 B of non-trimmed blocks, but I assume these are new read errors, since ddrescue isn't re-reading the portions already marked as rescued.

Will ddrescue discover the original read error on a subsequent pass, or is that one gone for good, meaning the only way to find and retry those blocks would be to start over from scratch with a new mapfile?

I want to recover as much of the old drive as possible, if not everything, so I'm willing to start over if necessary.

Thanks for any help on this.

1 Answer

About --generate-mode:

Ddrescue can in some cases generate an approximate mapfile, from infile and the (partial) copy in outfile, that is almost as good as an exact mapfile. It makes this by simply assuming that sectors containing all zeros were not rescued.

(source: GNU ddrescue manual, "Generate mode")

In the first try, the sector of the target disk corresponding to the erroneous sector of the source disk was never written to. Likewise, sectors corresponding to whatever was skipped were not written to. Assuming the target disk returns all zeros from sectors not yet written to, the later ddrescue --generate-mode run classified all these fragments as "not rescued".

--generate-mode cannot tell whether a sector is full of zeros because it corresponds to a non-tried, non-trimmed, non-scraped, or bad sector of the source disk, or because it is a healthy sector that was copied successfully but happened to contain all zeros. All it knows is that a sector full of zeros in outfile may not have been read yet. It simply classifies such a sector as non-tried when creating the approximate mapfile, so a future ddrescue run that actually uses the mapfile will try (or retry) to read it.
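
For illustration, a generated mapfile looks roughly like this ('+' = taken as rescued, '?' = non-tried, to be read). The offsets here are made up, and the exact header varies between ddrescue versions:

# current_pos  current_status
0x00000000     ?
#      pos        size  status
0x00000000  0x00100000  +
0x00100000  0x00010000  ?
0x00110000  0x00200000  +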

The subsequent ddrescue run may re-read some sectors unnecessarily: sectors that were read successfully but happened to contain all zeros, and therefore looked to --generate-mode like non-tried sectors. This is usually a minor inconvenience, if any.

You don't need to restart from scratch unless there were non-zeroed sectors on the target disk when you started. Such sectors would make --generate-mode believe they hold rescued data even when they don't. You believe your target disk, being brand new, contained only zeros. If that belief is right, you don't need to restart from scratch.
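
If you want to spot-check that belief, sample a region of the target that ddrescue can never have written: the source is 2 TB, so anything on the target past the 2 TB mark is still in factory state. A sketch (the device name and offset are assumptions; adjust them to your setup) — od collapses repeated bytes, so an all-zero sample prints a single line of zeros followed by a '*':

sudo dd if=/dev/sdb bs=1M skip=2500000 count=4 status=none | od -A d -t x1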

If your belief is right, the erroneous sector from the first try was classified by --generate-mode as "not rescued", and your current (still running) ddrescue has either already retried it or is about to.
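
One more note: your running command uses -n (--no-scrape), which skips the scraping phase, so after this pass completes you may want a final run without -n and with a few retry passes to work on the bad areas. A sketch with your device names; -d (direct disc access for the input) is optional but often behaves better on failing drives:

ddrescue -f -d -r3 /dev/sda /dev/sdb /tmp/ddrescue.log

Also consider copying the mapfile somewhere persistent from time to time; /tmp is typically cleared on reboot, which risks a repeat of the original loss.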

  • Thanks, that was my understanding of how --generate-mode works as well. The target drive was a brand-new 4 TB Seagate IronWolf straight out of the retail box, never mounted or partitioned; I just connected it and immediately started ddrescue. Do you think the assumption that this new drive contained all zeros is likely to hold?
    – zardano
    Commented Feb 16, 2022 at 21:40
  • @zardano It is likely true. ATA Secure Erase results in the disk reading all zeros, so that is a single special, distinguished state; there is no complementary command that results in the disk reading all ones. I would therefore expect a brand-new disk to be in exactly this state. Frankly, though, I have never tested a new drive for this, and I have never heard of any requirement that brand-new disks be in this state. Still, if I encountered a "brand new" disk that was not zeroed, I would find it suspicious. Commented Feb 16, 2022 at 22:06
