
It sounds like what you really want is an application that can download a file in parts from different interfaces and join them together at the end. E.g., if you knew your file was 100MB and you wanted to grab chunks of roughly 10MB, you'd want to do:

1. start a download of bytes 0..10000000 on eth0, saving to filename.part1
2. start a download of bytes 10000001..20000000 on eth0:0, saving to filename.part2
3. start a download of bytes 20000001..30000000 on eth0:1, saving to filename.part3
...
N-1. wait for all downloads to complete
N. join all filename.part* together to get filename.complete

I know wget can resume a partially-downloaded file. I'm pretty sure that works by checking the length of the existing local file and then asking the server for the rest of the file, starting from the next byte.
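
For reference, that resume behaviour is wget's -c/--continue flag. A quick illustration, reusing the example URL from the curl commands below:

    # resume an interrupted download: wget checks the local file's length
    # and requests the remainder from the server via a Range header
    wget -c http://some.server.com/bigfile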

It looks like curl supports partial downloads like this using the --range <byterange> option. So you could script the above steps like so:

1. curl --interface eth0 --range 0-10000000 http://some.server.com/bigfile -o bigfile.part1
2. curl --interface eth0:0 --range 10000001-20000000 http://some.server.com/bigfile -o bigfile.part2
... 
N. cat bigfile.part* > bigfile
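
If you'd rather not hand-write N curl lines, here's a minimal shell sketch of the whole procedure. The URL, file size, and interface names are just the assumptions carried over from the examples above; in particular, the total size has to be known up front (or fetched first) for the range arithmetic to work:

    #!/bin/bash
    # sketch only: split a known-size download into fixed-size ranges,
    # one background curl per range, round-robin across interfaces
    url="http://some.server.com/bigfile"
    size=100000000                    # total file size in bytes (must be known)
    chunk=10000000                    # ~10MB per part
    interfaces=(eth0 eth0:0 eth0:1)

    start=0
    part=1
    while [ "$start" -lt "$size" ]; do
        # ranges are 0-indexed and inclusive, so each part is exactly $chunk bytes
        end=$((start + chunk - 1))
        [ "$end" -ge "$size" ] && end=$((size - 1))
        iface=${interfaces[$(( (part - 1) % ${#interfaces[@]} ))]}
        # zero-pad the part number so the final glob sorts correctly past part 9
        curl --interface "$iface" --range "$start-$end" "$url" \
             -o "$(printf 'bigfile.part%03d' "$part")" &
        start=$((end + 1))
        part=$((part + 1))
    done

    wait                              # block until every background curl is done
    cat bigfile.part* > bigfile       # zero-padded names keep the parts in order

Note the zero-padded part numbers: with a bare bigfile.part* glob, bigfile.part10 would otherwise sort before bigfile.part2 once you have ten or more chunks.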

Caveats: this doesn't always work; if the server doesn't support HTTP range requests (byte serving), you'll get the whole file on each call. See man curl for details on the --range option.
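
One way to check up front (my own suggestion, not part of the original recipe) is to ask for just the headers and look for Accept-Ranges; servers that support byte serving usually advertise it, though its absence doesn't strictly prove anything:

    # HEAD request; "Accept-Ranges: bytes" indicates range support
    curl -sI http://some.server.com/bigfile | grep -i '^accept-ranges'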
