
I use wget to hit a URL every 10 seconds, like this:

#!/bin/sh
while true; do
    wget http://www.some.random.link/to/some/PHP/script
    sleep 10
done

This produces empty files named script_name.#, where # is incremented on each run.

Why does this happen?

2 Answers


That's because wget downloads the output of your script (which, presumably, is empty) and saves it to a file, appending a number each time so as not to overwrite the previously downloaded file.

There are a couple of options for preventing this (a full loop using one of them is sketched after the list).

  1. Prevent wget from downloading anything at all, using the --spider option:
    wget --spider http://www.some.random.link/to/some/PHP/script
    However, this may cause your script not to work since, IIRC, it only issues a HEAD request.
  2. Download the output, but discard it by sending it to /dev/null:
    wget -O /dev/null http://www.some.random.link/to/some/PHP/script
  3. Download the output, but delete it afterwards, using the --delete-after option:
    wget --delete-after http://www.some.random.link/to/some/PHP/script
  4. For the sake of completeness, if you can live with one file, use the -nc option to prevent wget from downloading files that already exist locally (note, though, that once the file exists, wget may not contact the server at all, so your script may no longer be triggered):
    wget -nc http://www.some.random.link/to/some/PHP/script
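
For example, here's the loop from the question rewritten with option 2 (a minimal sketch; the -q flag is my addition, to silence wget's progress output):

#!/bin/sh
# Trigger the remote script every 10 seconds without leaving files behind:
# -q silences wget's progress output, -O /dev/null discards the response body.
while true; do
    wget -q -O /dev/null http://www.some.random.link/to/some/PHP/script
    sleep 10
done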

As an alternative fix, you could avoid using wget — which, as its name implies, is intended for getting web content from a given URL. In other words, it's built to be a "download" tool. (It is, of course, very flexible, as Indrek demonstrated in his answer. But its defaults work against your purposes.)

Since you don't need or want to actually download anything (you're just accessing the URL to fire off a server-side script), the curl command is a bit more appropriate for your script. It will write the results of its requests to stdout by default, instead of to a file.

So, replacing wget with curl in your code above should fix the file-creation problem without any further work.
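
For instance, the question's loop becomes (a minimal sketch, unchanged apart from swapping in curl):

#!/bin/sh
# curl writes the response to stdout by default, so no files are created.
while true; do
    curl http://www.some.random.link/to/some/PHP/script
    sleep 10
done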

  • Specifically, curl --silent <address> 1>/dev/null will do the job.
    Commented Apr 23, 2012 at 11:33
  • Generally, yes, that would let curl operate silently regardless of the content being retrieved. However, it isn't even really necessary in this case: the remote server doesn't output anything (wget was "downloading" empty files), and curl produces no output unless something goes wrong or there's data retrieved. That's what makes curl so well suited to this particular application.
    – FeRD
    Commented Dec 16, 2012 at 12:38
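
In loop form, the first comment's suggestion would look like this (a sketch; --show-error is my addition, so failures are still reported despite --silent):

    curl --silent --show-error http://www.some.random.link/to/some/PHP/script >/dev/null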

