For backup purposes, I transferred a very big binary file over a connection with comparatively slow upstream (the transfer took 2 weeks) by rsyncing it onto a mounted CIFS share (so I could, and still can, access it block-wise). After the 2 weeks, rsync showed an error (unfortunately I couldn't save it), but the file sizes matched. Also,
tail -c 1000000000 myfile.img|md5sum # and
head -c 1000000000 myfile.img|md5sum
match, so the beginning and end of the file are identical.
Since my downstream is much faster, I downloaded the full image again and computed MD5 sums over the whole thing, and those do NOT match. So apparently, somewhere in those 1.5 TB, there is at least one bit that differs.
Is there a way to generate a "patch" from the two files I have locally and then apply it to the remote file, so that only the wrong blocks have to be transferred again?
Please note again: I do NOT have the ability to execute code remotely or use rsync features that require running rsync on the remote side. I guess I could still use rsync against the CIFS mount, and it would work at roughly the order of magnitude of my download rate, but I wonder whether there is a better way that exploits the fact that I have both versions locally. It would probably not be that hard to write something up myself, but I would prefer to use something tested and save the work.