
I am working on Windows 7, and I want to use wget to download all new files from a directory on a Windows server over HTTPS. In addition, I want to be able to resume the download of large files after a connection loss during transfer.

When I run

wget.exe --continue --recursive https://<host>:<port>/<some path>/pdf.dll

everything works fine.

But using

wget.exe --continue --no-clobber --recursive https://<host>:<port>/<some path>/pdf.dll

the download is not resumed after a connection loss; instead, the incomplete file remains on my local file system and wget reports:

File '//pdf.dll' already there; not retrieving.

(We want to use the --no-clobber option in order to avoid sending HEAD requests for all files that have already been transferred.)

Does this mean that --continue does not work well together with --no-clobber?

  • How should wget know that a file has finished downloading without sending a HEAD request to find out the size of the file on the server? Commented Oct 19, 2015 at 17:25
  • I think that's the point. With --no-clobber, no HEAD requests are sent for files that already exist locally, so --continue just can't work. Thanks.
    – Hans
    Commented Oct 20, 2015 at 5:45

2 Answers


That is because you are combining two options (--no-clobber and --continue):

  • --continue: Continue getting a partially-downloaded file
  • --no-clobber: Do not retrieve a file at all if a local copy already exists, even a partial one

As you can see, these two options ask Wget to do contradictory things, and --no-clobber takes precedence: the existing partial file is left alone and never resumed. Do not combine them. You can read the Download Options section of the manual for details.
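
In other words, to keep resuming working, drop --no-clobber and rely on --continue alone, as in the question's own working example (host, port and path are the question's placeholders):

wget.exe --continue --recursive https://<host>:<port>/<some path>/pdf.dll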

  • Thank you for your reply! Actually, the wget documentation does not explicitly say that --no-clobber prevents --continue from working (no time-stamping is needed, and no file update either; just compare the number of bytes locally and on the server side). On the other hand, you quite often see examples where these options are used together (e.g. here: labnol.org/software/wget-command-examples/28750, example 9). But my experience matches your answer...
    – Hans
    Commented Oct 19, 2015 at 13:16
  • No, wget has no option to resume mirroring a site with the convert-links option enabled after a stop. This is awful. Commented Jul 18, 2022 at 14:13

Very late to the party but I just came across this problem and have a solution:

If you use the -N / --timestamping flag in conjunction with the -c / --continue flag, then on resuming, the size of an incomplete local file differs from that of the remote file, and the download continues as expected. Completed files are not re-downloaded either.

From https://www.gnu.org/software/wget/manual/wget.html#Time_002dStamping

If the local file does not exist, or the sizes of the files do not match, Wget will download the remote file no matter what the time-stamps say.
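
Applied to the question's example, that would look something like this (a sketch using the question's host, port and path placeholders):

wget.exe --timestamping --continue --recursive https://<host>:<port>/<some path>/pdf.dll

Note that -N still has to query the server for each file (a HEAD request, or a conditional GET in newer wget versions) to compare timestamps and sizes, so it does not avoid the per-file round trips the question hoped to skip with --no-clobber.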
