A couple of weeks ago I bought a "2 TB" thumb drive of uncertain origin, intending to use it as intermediate storage while rebuilding a system (a jump of three Debian releases).
Can anybody suggest an efficient way of verifying the actual size of this drive, i.e. that it really contains "2 TB" of flash rather than a single "500 MB" device repeated across the address space?
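For concreteness, the kind of two-pass test I have in mind looks roughly like this (a simplified sketch, not my actual test program; the chunk size and little-endian stamp format are illustrative, and running it against a raw device such as `/dev/sdX` destroys its contents):

```python
import struct

CHUNK = 1 << 20  # 1 MiB per chunk; a convenient unit for sequential I/O

def payload(i):
    """Chunk filled with its own little-endian index, repeated."""
    return struct.pack("<Q", i) * (CHUNK // 8)

def write_pass(path, size):
    """Pass 1: fill the target with position-dependent data."""
    with open(path, "r+b") as f:
        for i in range(size // CHUNK):
            f.write(payload(i))

def read_pass(path, size):
    """Pass 2: re-read everything.  On a wrapped drive an early chunk
    comes back carrying a later chunk's index (the last data physically
    written there).  Returns the list of mismatching chunk numbers."""
    bad = []
    with open(path, "rb") as f:
        for i in range(size // CHUNK):
            if f.read(CHUNK) != payload(i):
                bad.append(i)
    return bad
```

The point is that the data is a function of the logical position, so a drive which aliases addresses cannot return the right bytes everywhere: the read pass sees a later chunk's index in an earlier chunk.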
I'd like to emphasise that I am fully aware of the liberties that manufacturers have long taken when stating capacities, and that my "2 TB" drive is likely to have a maximum real capacity of something like 1.75 TiB.
It was originally formatted with unpartitioned exFAT, and while my usual test program would write more than 1 TB of test data to it, it invariably glitched at some random point before reaching the read pass that would verify the block numbers were actually retained. While that could point to flakiness in the drive's microcontroller, the problem might equally be in the comparatively new exFAT support on Linux.
I am able to use GParted to partition the drive and reformat it as ext4 or ext2 without error.
Trying to run mke2fs manually with the -cc option for a read/write block test is taking about 80 hours per 1% of the drive. In addition, I've seen no explicit confirmation that it performs the two separate passes which would be needed to verify the size unambiguously.
I've not yet tried running my own test program, which I trust on smaller media (tens-of-GB scale), on this device formatted as ext2.
In cases where my test program is applied to a block device rather than to a file, I could possibly improve efficiency by adding a --sparse option which writes only the block number at the start of, e.g., each 4K block. This probably wouldn't help if the target was a test file, since (a) the OS might not allocate space for unwritten areas in sparse files and (b) there would be so many layers of translation involved that it would be virtually impossible to hit the flash device's block boundaries.
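Applied to a raw block device, that --sparse idea would look something like the following (again just a sketch under my own assumptions: 4 KiB spacing and an 8-byte stamp; on a real run the kernel's page cache could satisfy the read pass from RAM, so the device would need to be opened with O_DIRECT, or the caches dropped between the two passes):

```python
import struct

BLOCK = 4096  # one 8-byte stamp per assumed 4 KiB device block

def stamp(path, size):
    """Sparse write pass: seek to each block and write only its index,
    roughly 1/512th of the I/O of filling every block completely."""
    with open(path, "r+b") as f:
        for i in range(size // BLOCK):
            f.seek(i * BLOCK)
            f.write(struct.pack("<Q", i))

def check(path, size):
    """Read pass: any block whose stamp names a different index points
    at address wraparound (or outright data loss).  Returns a list of
    (expected, found) pairs."""
    bad = []
    with open(path, "rb") as f:
        for i in range(size // BLOCK):
            f.seek(i * BLOCK)
            (got,) = struct.unpack("<Q", f.read(8))
            if got != i:
                bad.append((i, got))
    return bad
```

Even this still issues one seek-plus-write per 4 KiB, so the win comes from the reduced data volume rather than fewer operations; batching several stamps per write would be a further refinement.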
Any suggestions would be appreciated.
f3: askubuntu.com/questions/737473/…