
I have a Samsung XE303C12 and I have been running Arch Linux ARM on it from an SD card. The SD card's partition table is as follows:

  1. A Chrome OS kernel partition, which the vboot-wrapped ALARM kernel occupies.
  2. An ext2 partition that I'd use for configuring U-Boot whenever it is installed to partition 1 instead of the Arch Linux kernel.
  3. The root filesystem that I installed Arch Linux to.

A recent attempt to update all packages at once ended up corrupting the root filesystem. I saw some message about flashing the kernel to a partition on the SD card and about the filesystem failing to be mounted read-only or read-write, or something along those lines. I tried fixing it with fsck, which prompted me several times about what to do with individual inodes; once I realized it was probably going to ask me this for every single inode on the partition, I ran fsck -y /dev/mmcblk1p3. It ran through maybe several hundred inodes before it stopped. I can't remember the error message.

In the hopes of preserving the data for future recovery, I am backing up /dev/mmcblk1p3 to a FAT32 filesystem on a USB drive using dd. Since FAT32 cannot hold files larger than 4 GiB, I decided to break it into segments using some shell code and loops.
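
The loop is roughly the following (a simplified sketch of what I'm running; the device name is from my setup, the mount point is an example):

    # Copy /dev/mmcblk1p3 in 64 MiB segments so every output file stays
    # far under FAT32's 4 GiB file-size limit.
    src=/dev/mmcblk1p3
    dest=/mnt/usb              # FAT32 USB drive, already mounted
    i=0
    while :; do
        dd if="$src" of="$dest/rootfs.img.$i" bs=64M count=1 skip="$i"
        # once we read past the end of the partition, dd writes an empty file
        if [ ! -s "$dest/rootfs.img.$i" ]; then
            rm -f "$dest/rootfs.img.$i"
            break
        fi
        i=$((i + 1))
    done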

Skipping ahead a bit: I've realized that dd is faster at the beginning of the process (I set bs to larger multiples of 512 to speed it up), so the first 64 MiB segment would be written to the USB filesystem in 3 seconds, and it would get progressively slower with each iteration. I found out that this is because the disk cache fills up.

I looked for a way to flush the cache for dd and stumbled upon this post on the Unix Stack Exchange website. The top answer says to run sync; echo 3 > /proc/sys/vm/drop_caches. A comment on that answer notes that the setting is not sticky, and the link in that comment gave me the idea to run echo 3 > /proc/sys/vm/drop_caches before every iteration of dd. I tried that, and dd's copying speed still dropped off.
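
Concretely, each iteration ended up looking like this (same sketch as above, with the same variables):

    sync                                # flush dirty pages to disk first
    echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries, inodes (needs root)
    dd if="$src" of="$dest/rootfs.img.$i" bs=64M count=1 skip="$i"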

The second solution mentioned in that answer was to run dd with iflag=direct to bypass the cache. I did that, but I also added oflag=direct, since I figured the cache would apply both to reading from the SD card and to writing to the USB drive. Another comment said that nocache should be used instead of direct, so I tried that as well. Both methods showed the same drop from ~17 MB/s to ~1-3 MB/s.
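
For reference, the two variants were (same loop as before; only the flags change):

    # O_DIRECT on both sides: bypass the page cache entirely
    dd if="$src" of="$dest/rootfs.img.$i" bs=64M count=1 skip="$i" \
       iflag=direct oflag=direct

    # nocache: advise the kernel to discard cached pages after use
    dd if="$src" of="$dest/rootfs.img.$i" bs=64M count=1 skip="$i" \
       iflag=nocache oflag=nocache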

I'm guessing that I might not be using those methods correctly. Is there any way to reliably flush the cache on every iteration to make dd faster, or some way to avoid using the cache altogether?

  • What blocksize (bs=) are you using with dd? The Q mentions 512 bytes and "multiples"... And you're copying the Arch install on the SD card to a hard drive, I'm guessing?
    – Xen2050
    Commented Apr 19, 2017 at 1:47
  • @Xen2050 I have tried using block sizes that are 2^x bytes where x is an integer from 12 to 25. I'm copying the SD card partition to a USB drive.
    – Melab
    Commented Apr 19, 2017 at 1:57
  • For the most part, you will find that dd is starting off faster because it is able to cache the reads, but eventually it needs to start writing them to disk. Compressing the output on the fly before it's written will save you some time if it's an option - but the key thing (after you have increased the block size, as you have already worked out) is to get faster SD cards!
    – davidgo
    Commented Apr 19, 2017 at 5:16
    For a question about a *nix-specific command, you'd be better off asking on unix.stackexchange.com or perhaps even ubuntu.stackexchange.com
    – barlop
    Commented Apr 12, 2020 at 3:57
    What kind of SD card are you using? Usually, speed issues are related to bad/generic (low-quality) chips on the SD card itself. I would try to clone your SD card to something like a Samsung SD card rated for 90 MB/s.
    – cybernard
    Commented Aug 11, 2020 at 0:56

2 Answers


As long as the block size (bs) used with dd is big enough, it's really only limited by hardware speed. I suspect you'll just have to wait.

Now I need a calculator... can't find one, just bc. So you're using a bs= of 2 to the power of 25... about 33 million bytes (32 MiB), that's big enough (FYI, using a small bs like 1 or 512 will usually slow dd to a crawl).
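
For the record, bc handles it fine:

    $ echo '2^25' | bc
    33554432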

It is possible, even likely, that the SD card and/or USB drive just won't read & write any faster. The initial "fast" write is probably just filling a write cache, and the real writing always happens at the same slow speed. Flushing the cache first would just keep the illusion of faster writes going a little longer.

So your installed Arch on an SD card is completely corrupted, and the filesystem severely damaged, and you already ran fsck a few times on it... I'm guessing that repairing the install is nearly hopeless now, and a fresh install would be 1000x faster & easier.

If you can mount it & copy off any data worth keeping, why not do that now and forget about further recovery? Important data should always have a backup anyway, so there really shouldn't be much to do.

FYI, you could also compress the dd image as you write it; gzip pipes nicely and would save a lot of space if the free space contains a lot of zeros, but it makes mounting the image later harder. Squashfs could image the whole drive too, for mountable read-only access later.
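
Something like this would do it (an untested sketch; the device is from your question, the paths are examples, and split keeps each piece under FAT32's 4 GiB cap):

    # stream the partition through gzip, splitting into FAT32-safe pieces
    dd if=/dev/mmcblk1p3 bs=4M | gzip -c \
        | split -b 4000M - /mnt/usb/rootfs.img.gz.
    # reassemble later with:
    #   cat /mnt/usb/rootfs.img.gz.* | gunzip > rootfs.img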

[Might as well put this in an "answer"]


I'm guessing that I might not be using those methods correctly, so is there anyway to reliably flush the cache every iteration to make dd faster or some way to just not use the cache at all?

If you use ddrescue, you have the -d option to avoid using the cache, at least on the input side.

-d, --direct use direct disc access for input file
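
For example (a sketch; the input device is from the question, the output paths are placeholders, and note that a single image file will still run into FAT32's 4 GiB limit, so the destination would need a different filesystem or splitting):

    # image the partition with O_DIRECT reads; the map file records
    # progress so the run can be resumed and bad areas retried
    ddrescue -d /dev/mmcblk1p3 /path/to/rootfs.img /path/to/rootfs.map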

  • I believe this option is iflag=direct in dd (direct: use direct I/O for data), since ddrescue's -d applies to the input file
    – Pierre
    Commented Nov 10, 2022 at 8:13

