
A couple of weeks ago I bought a "2Tb" thumb drive of uncertain origin, with the intention of using it as intermediate storage when rebuilding a system (moving three Debian releases forward).

Can anybody suggest an efficient way of verifying the actual size of this, i.e. that it actually has "2Tb" of Flash rather than a single "500Mb" device repeating across the address space?

I'd like to emphasise that I am fully aware of the liberties that manufacturers have long taken when stating capacities, and that my "2Tb" drive is likely to have a maximum real capacity of something like 1.75 TiB.

It was originally formatted as unpartitioned exFAT, and while my usual test program would write more than 1Tb of test data to it, it invariably glitched at some random point before reaching the read pass that would verify that the block numbers had actually been retained. While that could point to flakiness in the drive's microcontroller, the problem might equally be in the comparatively new exFAT support in Linux.

I am able to use gparted to partition the drive and reformat it as ext4 or ext2 without error.

Trying to run mke2fs manually with the -cc option for a read/write block test is taking about 80 hours per 1% of the drive. In addition, I've seen no explicit confirmation that it makes the two separate passes (write everything, then read everything back) which would be needed to verify the size unambiguously.
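
For what it's worth, the same kind of read/write surface scan can be invoked directly through badblocks rather than via mke2fs -cc; a minimal sketch, with the device name purely illustrative (and, like any write-mode test, destructive to everything on the drive):

badblocks -wsv -b 4096 /dev/sdX

Note, though, that badblocks fills every block with the same pattern on each pass, so even its write-then-read cycles cannot by themselves distinguish a genuine 2Tb device from a smaller one whose address space wraps around; that needs a payload unique to each block, such as the block number itself.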

I've not yet tried running my own test program (which I trust on smaller media, at the scale of tens of Gb) against this device formatted as ext2.

In cases where my test program was being applied to a block device rather than to a file, I could probably improve efficiency by adding a --sparse option which wrote only the block number into e.g. each 4K block. This probably wouldn't help if the target were a test file, since (a) the OS might not allocate space for unwritten areas of a sparse file and (b) there would be so many layers of translation involved that it would be virtually impossible to hit the Flash device's block boundaries.
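
As a very rough illustration of that idea (this is not my actual program; the device name, the marker size and the 1GiB spacing are all illustrative, and it overwrites data at those offsets), one could stamp a marker at widely spaced offsets on the raw device and only read them back once every marker has been written:

# write an 8-character offset index at the start of every 1GiB boundary (2048 markers for a nominal 2TiB)
for i in $(seq 0 2047); do
    printf '%08u' "$i" | dd of=/dev/sdX bs=1M seek=$((i * 1024)) conv=notrunc,fsync status=none
done
# read every marker back from the device itself (iflag=direct bypasses the page cache)
for i in $(seq 0 2047); do
    got=$(dd if=/dev/sdX bs=1M skip=$((i * 1024)) count=1 iflag=direct status=none | head -c 8)
    [ "$got" = "$(printf '%08u' "$i")" ] || echo "mismatch at ${i} GiB (read back: $got)"
done

Deferring the read pass until all the markers have been written is what catches a wrapped address space; checking each marker immediately after writing it would not.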

Any suggestions would be appreciated.

  • Just dd /dev/zero to the raw device and see how far it gets. It's as simple as that. Commented Aug 18, 2022 at 9:33
  • @Bib that won’t verify that the storage isn’t “duplicated” (e.g. 2TiB provided by having 512GiB of storage and wrapping). Commented Aug 18, 2022 at 9:34
  • That wouldn't work if the first block (numbered zero) was overwritten when the 500 millionth block was written and so on. After writing the entire device, it's necessary to explicitly go back and check that all blocks are correct. Commented Aug 18, 2022 at 9:35
  • @StephenKitt Then the only way is to strip it down, x-ray it and start analysing it. dd'ing it is about as reliable as you are going to get. You could always use /dev/random and create a 2TB file of it first, then dd it to the device then back again and compare (a rough sketch of that round trip follows these comments). Commented Aug 18, 2022 at 9:37
  • Seems like there is something for linux called f3: askubuntu.com/questions/737473/… – Panki Commented Aug 18, 2022 at 9:42
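
For completeness, a minimal sketch of the write-out/read-back comparison suggested in the comment above (device name and scratch paths are illustrative, /dev/urandom is used instead of /dev/random purely for speed, it needs as much free space elsewhere as the drive claims to hold, and it destroys everything on the drive):

size=$(blockdev --getsize64 /dev/sdX)               # bytes the device claims to expose
head -c "$size" /dev/urandom > /tmp/reference.img   # reference copy, same size as the device
dd if=/tmp/reference.img of=/dev/sdX bs=4M oflag=direct status=progress
dd if=/dev/sdX of=/tmp/readback.img bs=4M iflag=direct status=progress
cmp /tmp/reference.img /tmp/readback.img && echo "all data retained" || echo "device lost or remapped data"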

1 Answer


I found a tool called f3 (fight flash fraud) which appears to do this.

There even seems to be a Qt GUI for it.

Github

Documentation

A quote from the readme:

Quick capacity tests with f3probe

f3probe is the fastest drive test and suitable for large disks because it only writes what's necessary to test the drive. It operates directly on the (unmounted) block device and needs to be run as a privileged user:

./f3probe --destructive --time-ops /dev/sdX

Warning

This will destroy any previously stored data on your disk!
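
Outside the quoted readme: the f3 suite also provides f3write and f3read, which the comments below use; they exercise the drive through a mounted filesystem rather than the raw block device (mount point illustrative):

f3write /media/yourdrive/    # fills the free space with 1GB test files
f3read /media/yourdrive/     # reads them back and reports how much data survived intact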

  • I've run the basic f3probe a couple of times, which reports that it's a 32Gb device with a consistent block count (but a couple of other details varying). I'm currently reformatting the filesystem on the assumption that the destructive test has corrupted at least one inode table and will report on the result of f3write/f3read. Commented Aug 18, 2022 at 17:26
  • f3write claimed to have put 93Gb of data on the formatted filesystem before terminating in good order, and still claimed "Free space: 1.79 TB" with no error indication. This is not a combination which inspires confidence in the test program. TBC. Commented Aug 22, 2022 at 15:14
  • f3read started off reporting that roughly 5% of each file was corrupt, then at the 30Gb point flipped to roughly 95% corrupt... both cases also had sporadic I/O errors. The summary said that roughly 30Gb was OK and roughly 90Gb was corrupt. Even if not stated explicitly, the behaviour change at around 30Gb suggests that this is a roughly 30Gb device. Commented Aug 23, 2022 at 7:43
  • Final comment for the record. As above, f3 suggested that "something" happened at around 30Gb, but results were inconclusive. My own test program, modified to test blocks sparsely so that a "2Tb" device can be written in 2.5 hours, shows large numbers of single-bit errors, with the first recognisable verify failure around 8Gb in after roughly 3 hours of reading. The device is obviously a dud, definitely isn't 2Tb, and might be as small as 32Gb. I'm left feeling good about f3; I can't publish my own program due to possible IP ownership issues... besides which, it's written in Pascal :-) Commented Aug 30, 2022 at 14:10

