
dd's output image file is larger than the source partition, and dd runs out of space on the target partition (where the image is created) even though the target is larger than the source.

I am trying to copy a partition to an image file on another partition of the same disk. The target partition is slightly larger than the source partition. Both are ext3 partitions.

Running from the OpenSUSE-Rescue live CD. YaST shows the input partition (sdb1) as 62.5 GiB and the output partition sdb2 as 62.85 GiB.

Thunar shows the input sdb1 as 65.9 GB and the output sdb2 as 66.2 GB, and the dd image file also reaches 66.2 GB, so it obviously maxes out sdb2.

Here is the console:

(sdb1 was unmounted; I tried dd a few times)

linux:# dd if=/dev/sdb1 of=RR.image bs=4096

dd: error writing ‘RR.image’: No space left on device
16156459+0 records in
16156458+0 records out
66176851968 bytes (66 GB) copied, 2648.89 s, 25.0 MB/s

Additional info by request:

And again: I am looking at the difference between the size of the source partition sdb1 and the size of the dd image file RR.image created from it. That file resides on sdb2.


There is still something unclear here: I am running dd as root, so that reserved space should be available to write into, correct? The target sdb2 is 62.85 GiB, while the total bytes for the image, as you said, are about 61.63 GiB. Here is also the output of the df and POSIXLY_CORRECT=1 df commands:

The system now is SystemRescueCd.

root@sysresccd /root % df

Filesystem 1K-blocks Used Available Use% Mounted on
…
/dev/sdb1 64376668 7086884 56241208 12% /media/Data1
/dev/sdb2 64742212 64742212 0 100% /media/Data2
/dev/sdb3 5236728 4785720 451008 92% /usr/local

root@sysresccd /root % POSIXLY_CORRECT=1 df /dev/sdb1 
Filesystem     512B-blocks     Used Available Use% Mounted on
/dev/sdb1        128753336 14173768 112482416  12% /media/Data1

root@sysresccd /root % POSIXLY_CORRECT=1 df /dev/sdb2    
Filesystem     512B-blocks      Used Available Use% Mounted on
/dev/sdb2        129484424 129484424         0 100% /media/Data2

The numbers are exactly the same as in plain df once divided by 2; the divisor is 1024 B / 512 B = 2.
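That unit relationship can be verified with shell arithmetic, using sdb1's numbers from the df outputs above:

```shell
# POSIXLY_CORRECT=1 df reports 512-byte blocks; plain df reports 1K-blocks.
# Since 1024 / 512 = 2, halving the POSIX figure must give the plain one.
posix_blocks=128753336          # sdb1, 512B-blocks
echo $(( posix_blocks / 2 ))    # 64376668, matching the plain df column
```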

  1. sdb1 is smaller than sdb2. The 100 percent usage on sdb2 now is caused by the dd image file that filled the partition up; it should be the only file on it now.

  2. The image file itself is 66,176,851,968 bytes, according to both dd (at run time) and Thunar. Divided by 1024 that gives 64,625,832 1K-blocks, correct? So it is still smaller than what df reported for sdb2, by 116,380 K, yet it is LARGER THAN sdb1 (THE SOURCE), and it maxes out the partition sdb2.

The question is: what is taking up that space on sdb2?


But most important and interesting is:

Why is the target file larger than the source partition dd created it from? To me that means I can't write it back.

sdb1 (64376668K) < RR.image (64625832K)

And

sdb1 (64376668 1K-blocks) < RR.image (64625832 1K-blocks) < sdb2 (64742212 1K-blocks)

(I hope things were calculated right…)

Now I checked the blocks that are reserved for root. I found this command to execute:

root@sysresccd /root % dumpe2fs -h /dev/sdb1 2> /dev/null | awk -F ':' '{ if($1 == "Reserved block count") { rescnt=$2 } } { if($1 == "Block count") { blkcnt=$2 } } END { print "Reserved blocks: "(rescnt/blkcnt)*100"%" }'

Reserved blocks: 1.6%

root@sysresccd /root % dumpe2fs -h /dev/sdb2 2> /dev/null | awk -F ':' '{ if($1 == "Reserved block count") { rescnt=$2 } } { if($1 == "Block count") { blkcnt=$2 } } END { print "Reserved blocks: "(rescnt/blkcnt)*100"%" }'

Reserved blocks: 1.59999%

So the percentage reserved for root is the same on both partitions, in case that matters.
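For reference, the same superblock fields can also be read without the awk one-liner; a sketch, assuming e2fsprogs' tune2fs, which prints the same values as dumpe2fs -h:

```shell
# "Block count" and "Reserved block count" straight from the superblock
tune2fs -l /dev/sdb1 | grep -E '^(Block|Reserved block) count'
```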


Here is the output for gdisk:

root@sysresccd /root % gdisk -l /dev/sdb

GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory. 
***************************************************************

Disk /dev/sdb: 312581808 sectors, 149.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): DCF8AFC4-11CA-46C5-AB7A-4818336EBCA3
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 312581774
Partitions will be aligned on 2048-sector boundaries
Total free space is 7789 sectors (3.8 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048       131074047   62.5 GiB    8300  Linux filesystem
   2       131074048       262889471   62.9 GiB    8300  Linux filesystem
   3       302086144       312580095   5.0 GiB     0700  Microsoft basic data
   5       262891520       293771263   14.7 GiB    8300  Linux filesystem
   6       293773312       302086143   4.0 GiB     8200  Linux swap

So what is the real size of sdb1 then?

Isn't sdb2 (N2) larger than sdb1 (N1)? So WHY does the image file grow large enough to fill sdb2 (N2)? If I turn off the space reserved for root on sdb2, will it fit there then?

  • If my answer doesn't convince you then please provide the exact numbers to compare. E.g. to tell how large your partitions are: sudo gdisk -l /dev/sdb; to tell how much space your target filesystem can provide: POSIXLY_CORRECT=1 df /mountpoint/where/sdb2/is/mounted/. Please edit the question and paste the outputs of these commands (after you merge the accounts, if necessary). I've seen your(?) proposed edits and I think you're still comparing the wrong numbers. 64376668KiB for sdb1 is irrelevant; the partition is larger than that. Commented Aug 14, 2017 at 8:41
  • You're welcome. Full dd output will match the partition size, df operates on filesystems. 64376668KiB is the space available on the filesystem within sdb1, the partition is larger for sure. Once again: what is the output of gdisk -l /dev/sdb? You can read the partition size from it, this is what matters. Commented Aug 14, 2017 at 15:35

2 Answers


Every filesystem needs some space for metadata. Additionally, the ext family reserves some space for the root user: 5% by default.

Example

In my Kubuntu I created a (sparse) file of 1GiB:

truncate -s 1G myfile

and made ext3 filesystem within it. The command was plain

mkfs.ext3 myfile

This instantly allocated about 49MiB (~5% in this case) within myfile. I could see that because the file was sparse and initially reported 0B of real usage on my disk, then it grew. I assume this is where the metadata lives.

I mounted the filesystem; df -h reported 976MiB of total space, but only 925MiB available. This means another ~5% wasn't available to me.

Then I filled up this space (after cd to the mountpoint) with

dd if=/dev/urandom of=placeholder

As a regular user I was able to use only 925MiB. The reported "disk" usage was then 100%. However, doing the same as root, I could write 976MiB to the file. When the file grew beyond 925MiB, the usage stayed at 100%.
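The experiment above can be reproduced with a short script. This is only a sketch assuming e2fsprogs and GNU coreutils are installed; mounting the image would additionally require root, so this builds the filesystem and inspects it without mounting:

```shell
# create a 1 GiB sparse file and put an ext3 filesystem inside it
truncate -s 1G myfile
mkfs.ext3 -q -F myfile

# real disk usage jumps from 0 to roughly tens of MiB: the metadata
du -h --apparent-size myfile   # logical size: 1.0G
du -h myfile                   # real blocks actually allocated

# the default ~5% root reservation, visible in the superblock
dumpe2fs -h myfile 2>/dev/null | grep -E '^(Block|Reserved block) count'
```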

Conclusion

Comparing the sizes of your partitions is wrong in this case; so is comparing the sizes of your filesystems. You should have checked the available space on the target filesystem (e.g. with df) and compared it to the size of the source partition.


EDIT:

To make it clear: your 66176851968 bytes are about 61.63 GiB. This is not larger than the source partition, which is 62.5 GiB. The source partition was not fully read when the target filesystem got full.
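The GB/GiB arithmetic, as a quick check (dd prints decimal GB, gdisk prints binary GiB):

```shell
bytes=66176851968
# decimal: 1 GB = 10^9 bytes -> this is dd's "(66 GB)"
awk -v b="$bytes" 'BEGIN { printf "%.2f GB\n",  b / 1000000000 }'   # 66.18 GB
# binary: 1 GiB = 2^30 bytes -> comparable to gdisk's 62.5 GiB
awk -v b="$bytes" 'BEGIN { printf "%.2f GiB\n", b / 1073741824 }'   # 61.63 GiB
```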

In case you're not familiar with GB/GiB distinction, read man 7 units.


EDIT 2

Now we have all the actual numbers. Let's stick to the unit of 512B, it's a common sector size.

  • Your sdb1 partition occupies 131074048-2048=131072000 units on the disk. Let's call this P1. This is from gdisk output.
  • Your sdb2 partition occupies 262889472-131074048=131815424 units on the disk. Let it be P2. This is also from gdisk output.
  • Your filesystem inside sdb1 can store files up to 128753336 units total. Let's call this number F1. This is from df output.
  • Your filesystem inside sdb2 can store up to 129484424 units. Let it be F2. This is also from df output.

The difference between P1 and F1, as well as the difference between P2 and F2, can be explained if you know there must be room for metadata. This is mentioned earlier in this answer.

Your dd tried to copy the whole sdb1 partition, i.e. P1 of data, into a file that takes space provided by the filesystem inside sdb2, i.e. F2 of available space.

P1 > F2 – this is the final answer. Your image file didn't grow larger than it should have. It looks to me like you expected its size to be F1; in fact the complete image would have a size of P1 units.

P2 and F1 are irrelevant in this context.
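The four numbers can be recomputed directly from the quoted gdisk and df outputs; a sketch in shell arithmetic:

```shell
# partition sizes in 512 B sectors (gdisk: one past the end minus start)
P1=$(( 131074048 - 2048 ))        # sdb1 = 131072000 sectors
P2=$(( 262889472 - 131074048 ))   # sdb2 = 131815424 sectors
# filesystem capacities in 512 B blocks (POSIXLY_CORRECT=1 df)
F1=128753336
F2=129484424
# the copy needs P1 sectors of room, but only F2 exist on the target
[ "$P1" -gt "$F2" ] && echo "P1 > F2: the image cannot fit"
```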

  • Just found this second EDIT now. Earlier I did my own investigation based on your earlier posts, which I posted as an answer myself. Sorry for that. And thank you very, very much! I hope I wrote my answer correctly.
    – Michael P
    Commented Aug 14, 2017 at 21:25
  • Hope I understood things right, can you correct me if not?
    – Michael P
    Commented Aug 14, 2017 at 21:35

After that long discussion I realized what you meant.

We finally got to the point. My question was somewhat obscure initially, before I edited it. Really, thank you!

I found this command to get the exact size of the partitions in bytes:

root@sysresccd /root % parted /dev/sdb unit B p

Model: ATA WDC WD1600AAJS-0 (scsi)
Disk /dev/sdb: 160041885696B
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start          End            Size          Type      File system     Flags
 1      1048576B       67109912575B   67108864000B  primary   ext3            boot
 2      67109912576B   134599409663B  67489497088B  primary   ext3
 4      134600457216B  154668105727B  20067648512B  extended
 5      134600458240B  150410887167B  15810428928B  logical   ext4
 6      150411935744B  154668105727B  4256169984B   logical   linux-swap(v1)
 3      154668105728B  160041009151B  5372903424B   primary   fat32           lba

So basically I need to compare the real size of sdb1 (N1) in this list to the available space on sdb2 (N2).

But for the latter we use the POSIXLY_CORRECT=1 df command on the target (sdb2) filesystem, which reported 129484424 512B-blocks in this case.

Dividing sdb1's 67108864000 B by 512 B gives 131072000 512B-blocks. Or, going the other way, we can multiply 129484424 × 512 = 66296025088 bytes.

So 66296025088 bytes (available space on sdb2) < 67108864000 bytes (raw size of sdb1). Clearly the sdb1 partition image cannot fit into the available space on sdb2. And there is also space reserved for root on sdb2 that should be taken into account.
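A general pre-flight check would have caught this before the long dd run. This is only a sketch: blockdev needs root, the mountpoint is taken from the df output above, and the --output flag assumes GNU coreutils:

```shell
SRC=/dev/sdb1          # partition to image
DEST=/media/Data2      # filesystem that will hold the image

src_bytes=$(blockdev --getsize64 "$SRC")               # raw partition size
avail_kib=$(df --output=avail -k "$DEST" | tail -n 1)  # free space, 1K-blocks
avail_bytes=$(( avail_kib * 1024 ))

if [ "$src_bytes" -le "$avail_bytes" ]; then
    echo "OK: image fits ($src_bytes <= $avail_bytes bytes)"
else
    echo "Will NOT fit: short by $(( src_bytes - avail_bytes )) bytes"
fi
```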

As for my question about the image file being larger than the partition: I was basically comparing the dd image to the sdb1 filesystem size instead of to the raw partition size, which is what dd reads in full. Correct? I can even estimate how much more space I would need for the operation to complete: the incomplete dd image was 66,176,851,968 bytes, and the raw sdb1 partition is 67,108,864,000 bytes, so the image is short by 67108864000 − 66176851968 = 932012032 bytes ≈ 888 MiB.
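The shortfall estimate checks out in shell arithmetic:

```shell
part_bytes=67108864000    # raw size of sdb1 (from parted)
img_bytes=66176851968     # bytes dd wrote before running out of space
short=$(( part_bytes - img_bytes ))
echo "$short bytes"                   # 932012032 bytes
echo "$(( short / 1024 / 1024 )) MiB" # 888 MiB still to copy
```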

But what takes up that much space on an empty partition? Metadata and the space reserved for root? That much? Thank you very much!!

Good to know all this!!

  • Formal note: this site is not a forum, it's Q&A. In an answer one shouldn't directly address anyone but the asker. You answered yourself (and it's OK to do so), not me; the answer should reflect this. Comments are for (reasonable) discussions. Commented Aug 14, 2017 at 21:40
  • I understand now, thanks! Thank you indeed for the help and the patience!
    – Michael P
    Commented Aug 14, 2017 at 21:44
  • "So 66296025088 bytes (available space on sdb2) < 67108864000 bytes (raw size of sdb1). Clearly that sdb1 partition image cannot fit into the available space on sdb2." Confirmed, these numbers seem correct to me. This is the main issue here and I'm glad we finally worked this out. Commented Aug 14, 2017 at 21:57
I'm glad too!
    – Michael P
    Commented Aug 14, 2017 at 22:32
