
Currently I use httpfs2 to mount direct web URLs in the filesystem, for example a rar archive that has to be decompressed, because I don't have enough space on my VPS to download and then decompress the whole file (2x the space). However, reading through httpfs2 is very slow: when I download the file with wget I get at least 10 MB/s, but when I copy the httpfs2-mounted archive in Midnight Commander I get only 600 KB/s. What can I do to get close to the speed the connection allows? A sketch of the setup follows.
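
For reference, the setup described is roughly this (the URL and mount point are placeholders; httpfs2 takes a URL and a mount point):

$ mkdir -p /mnt/http
$ httpfs2 http://example.com/archive.rar /mnt/http
$ # The archive now appears as a regular read-only file
$ # and can be read without downloading it first
$ ls -l /mnt/http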

1 Answer


You can't get the same performance with httpfs2 because of HTTP overhead. For 10 MB of file it issues ~100 requests, each asking for ~100 KB of data (source: Wireshark), and the per-request latency kills throughput. You could probably tune FUSE to use bigger chunks, but that will likely consume more memory.
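
One cheap experiment along those lines, since mc copies with a fairly small buffer: read the mounted file with a larger block size and see whether throughput improves (the path below is a placeholder; note the kernel still splits reads at FUSE's own limit, so gains may be modest):

$ # Read 100 MB with 1 MB read() calls, discard it, and time the run
$ time dd if=/mnt/http/archive.rar of=/dev/null bs=1M count=100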

Another option would be to start requesting the next chunk before the current one has finished transferring (read-ahead), or to download multiple chunks in parallel, but some servers don't allow that.
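
You can check whether a server supports byte ranges at all before counting on parallel downloads (the URL is a placeholder):

$ # Servers that allow partial downloads usually answer a HEAD
$ # request with "Accept-Ranges: bytes"
$ curl -sI http://server/file.tar.gz | grep -i accept-ranges

$ # Crude manual sketch: fetch two 5 MB byte ranges in parallel
$ curl -s -r 0-5242879 -o part1 http://server/file.tar.gz &
$ curl -s -r 5242880-10485759 -o part2 http://server/file.tar.gz &
$ wait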

A much simpler solution for this use case would be a pipeline, so the compressed data is unpacked as it arrives and only the extracted files ever touch the disk:

$ curl http://server/file.tar.gz | tar xzv
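
The same idea works with wget, which you already get full speed from; -qO- writes the download to stdout (the URL is a placeholder):

$ wget -qO- http://server/file.tar.gz | tar xzv

One caveat: this streams formats like tar.gz fine, but rar archives generally need a seekable file to unpack, which is exactly the case where a mount like httpfs2 still helps.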
