It sounds like what you really want is an application that can download a file in parts from different interfaces and join them together at the end. Eg, if you knew your file was 100MB, and you wanted to grab chunks of roughly 10MB, you'd want to do:
1. start a download of bytes 0..10000000 on eth0, saving to filename.part1
2. start a download of bytes 10000001..20000000 on eth0:0, saving to filename.part2
3. start a download of bytes 20000001..30000000 on eth0:1, saving to filename.part3
...
N-1. wait for all downloads to complete
N. join all filename.part* together to get filename.complete
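The join step relies on the parts being exact, contiguous byte ranges of the original. A minimal local demonstration of that idea, with `dd` standing in for the per-interface downloads (the filenames and the 26-byte stand-in file are just for illustration):

```shell
# Split a small stand-in file into byte-range parts, then join them
# and verify the result matches the original.
printf 'abcdefghijklmnopqrstuvwxyz' > original           # 26-byte stand-in file
dd if=original of=original.part1 bs=1 skip=0  count=10 2>/dev/null  # bytes 0-9
dd if=original of=original.part2 bs=1 skip=10 count=10 2>/dev/null  # bytes 10-19
dd if=original of=original.part3 bs=1 skip=20 count=6  2>/dev/null  # bytes 20-25
cat original.part1 original.part2 original.part3 > original.joined
cmp -s original original.joined && echo "parts join back into the original"
```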
I know `wget` can resume a partially-downloaded file. I'm pretty sure that works by reading to the end of the existing file and then requesting the rest of the file, starting from the next byte, from the server.
It looks like `curl` supports partial downloads like this via the `--range <byterange>` option. So you could script the above steps like so:
1. curl --interface eth0 --range 0-10000000 http://some.server.com/bigfile -o bigfile.part1
2. curl --interface eth0:0 --range 10000001-20000000 http://some.server.com/bigfile -o bigfile.part2
...
N. cat bigfile.part* > bigfile
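The whole sequence can be generated rather than typed by hand. A sketch, assuming the URL, a known total size, ~10MB chunks, and interface names `eth0`, `eth0:0`, ... (all of which you'd substitute for your own setup); it computes non-overlapping ranges and zero-pads the part numbers so the final `cat bigfile.part*` glob stays in order past 9 parts:

```shell
#!/bin/sh
# Sketch: emit one backgrounded curl per chunk, then wait and join.
# URL, TOTAL, CHUNK, and the eth0/eth0:N interface names are assumptions.
URL="http://some.server.com/bigfile"
TOTAL=100000000        # total file size in bytes (e.g. from a HEAD request)
CHUNK=10000000         # bytes per chunk
CMDS=""
i=0
start=0
while [ "$start" -lt "$TOTAL" ]; do
    end=$((start + CHUNK - 1))
    [ "$end" -ge "$TOTAL" ] && end=$((TOTAL - 1))
    # first chunk on eth0, later chunks on the aliases eth0:0, eth0:1, ...
    if [ "$i" -eq 0 ]; then iface="eth0"; else iface="eth0:$((i - 1))"; fi
    part=$(printf 'bigfile.part%02d' "$i")
    CMDS="$CMDS
curl --interface $iface --range $start-$end $URL -o $part &"
    start=$((end + 1))
    i=$((i + 1))
done
CMDS="$CMDS
wait
cat bigfile.part* > bigfile"
printf '%s\n' "$CMDS"
```

This prints the commands instead of running them, so you can inspect the ranges before letting it loose on a real server.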
Caveats: this doesn't always work. If the HTTP/1.1 server doesn't have byte-range requests enabled, you'll get the whole file on each call. See `man curl` for details on the `--range` option.
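You can check for that caveat up front: a server that honours byte-range requests normally answers a HEAD request with an `Accept-Ranges: bytes` header. A sketch of the check; a canned response stands in for the real headers here so the snippet runs offline:

```shell
# In practice you'd fetch the real headers, e.g.:
#   headers="$(curl -sI http://some.server.com/bigfile)"
# Canned sample response, for illustration only:
headers="HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 100000000"
if printf '%s\n' "$headers" | grep -qi '^accept-ranges: *bytes'; then
    supports=yes
else
    supports=no
fi
echo "byte-range support: $supports"
```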
Edit: fixed byte ranges in examples