
I have an old laptop with a 500Gb non-removable (soldered) hard drive. I am attempting to recover some data from the drive. After some reading, I've discovered that I should create a copy of the drive so if something goes wrong, the drive is still intact. I've also read that dd is one of the best options to do this on Linux.

Here's the problem. I have another PC with tons of storage for the drive image, but no way to transfer it. All I have is a 60Gb USB drive.

So how do I create a drive image of the first ~60Gb, transfer it to my other PC, then repeat until my entire drive is backed up?

And once they're transferred to my PC, how do I put the drive images back together?

  • I would not try what you are trying. 1 TB USB drives are inexpensive and will get your data in one go.
    – anon
    Commented Oct 5, 2022 at 19:27
  • Actually, ddrescue is the best tool because it is made for this very purpose: copying as much as possible from potentially damaged media. It does not do chunks. // Maybe a network transfer would be appropriate here. Is this feasible for you?
    – Daniel B
    Commented Oct 5, 2022 at 20:17
  • "I've also read that dd is one of the best options to do this on Linux" – It may be one's choice, still dd is a cranky tool which is hard to use correctly. You may be interested in copying a device sequentially, in chunks, with error recovery. Commented Oct 5, 2022 at 20:17
  • Gb is an unusual unit in this context. 500Gb is 500 gigabits i.e. 62.5 gigabytes. Do you mean 500 GB? Similarly for 60Gb: do you mean 60 GB? My point is: even otherwise flawless code won't work well if there's misunderstanding regarding units. Commented Oct 6, 2022 at 11:07

2 Answers


Although I agree with the comment above that an external 1 TB drive is cheap enough and would let you back up in one go, here is how to do it in chunks:

dd if=/dev/sdx of=chunk1 bs=1m count=60000 will create the first chunk

dd if=/dev/sdx of=chunk2 bs=1m count=60000 skip=60000 will create the second chunk

dd if=/dev/sdx of=chunk3 bs=1m count=60000 skip=120000 will create the third chunk

etc...
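
To avoid typing each command by hand, the same idea can be written as one parameterized command. This is only a sketch: /dev/sdx, the output location, and the chunk size are placeholders you must adapt, and bs=1M plus iflag=fullblock assume GNU dd (see the comment below this answer about suffixes and short reads):

# Sketch: create chunk number N of the disk image; run it once per chunk,
# copying the finished chunk to the USB drive and deleting it locally
# (or point of= directly at the mounted USB drive) before moving on.
# Assumes GNU dd; /dev/sdx and the chunk size are placeholders.
N=0                      # 0 for the first chunk, 1 for the second, ...
CHUNK_MB=57000           # MiB per chunk; small enough to fit a ~60 GB USB drive
dd if=/dev/sdx of=chunk$N bs=1M count=$CHUNK_MB skip=$((N * CHUNK_MB)) iflag=fullblock status=progress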

To recombine the chunks:

cat chunk1 chunk2 chunk3 [...] > wholedisk

Or, without chunking and without an external drive, you could create a network share on the PC that has tons of storage and write the image directly to it from the machine you are backing up, in one go.
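
As a concrete sketch of that idea (assuming the big-storage PC runs an SSH server and is reachable as user@bigpc, both of which are placeholders), the image can be streamed over the network without ever touching the USB drive:

# Sketch: stream the whole disk to the other PC over SSH in one go.
# Assumes GNU dd on the laptop and sshd on the receiving PC;
# user@bigpc and the destination path are placeholders.
dd if=/dev/sdx bs=1M iflag=fullblock status=progress | ssh user@bigpc 'cat > /path/to/wholedisk.img'

This avoids the 60Gb limit entirely, at the cost of one long transfer over the network.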

  • (1) m (in bs=1m) is not a portable suffix. E.g. GNU dd understands bs=1M (also not portable), but not bs=1m. (2) I think /dev/sdx is able to provide 1 MiB per read reliably, but in general you may get less and then your count and skip may desynchronize. In general use iflag=fullblock if supported, when relying on count. Commented Oct 5, 2022 at 20:37

An alternative solution could be DMDE. DMDE allows for even more direct access to the storage device using its native interface. This allows for greater control over error recovery (compared to dd or even ddrescue).

  1. Create image, use split option
  2. Recover files from the disk image(s)

[Screenshot: Split image in multiple parts]

To recover files, open the .ini file DMDE created at the time of imaging; it will help re-assemble the image parts as if they were a JBOD array.

[Screenshot: Combine image parts as JBOD]

If you make the image parts one by one, as you seem to be intending, you need to create the JBOD array manually, which is simply a matter of adding the image parts in the correct order. If you assign meaningful names to the parts (part-01, part-02, etc.), this should be easy.

Imaging can be done using the free version of DMDE, which also offers file recovery options: the Free Edition includes all features, but a single recovery operation recovers up to 4000 files from the current panel only (you should first open a subdirectory in the current panel and then recover the files in that panel).
