Is there a way to separate `wget`'s download and `--convert-links` functionality? For those unfamiliar with `wget` and/or `--convert-links`: long story short, `wget` can be used to download a website, and `--convert-links` modifies the downloaded HTML files so the downloaded website works offline. It does that by converting the `href`/`src`/etc. attributes to reference local files instead of the remote website.
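For instance, a typical all-in-one invocation might look like this (`example.com` is a placeholder):

```
# Download a site recursively, fetch page requisites (images, CSS, ...),
# and rewrite links for offline browsing once everything is on disk.
wget --recursive --page-requisites --no-parent --convert-links https://example.com/
```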
This is the official explanation:
`-k` / `--convert-links`
After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.
Each link will be changed in one of the two ways:
• The links to files that have been downloaded by Wget will be changed to refer to the file they point to as a relative link.
Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the link in doc.html will be modified to point to ../bar/img.gif. This kind of transformation works reliably for arbitrary combinations of directories.
• The links to files that have not been downloaded by Wget will be changed to include host name and absolute path of the location they point to.
Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to point to http://hostname/bar/img.gif.
Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet address rather than presenting a broken link. The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory.
Note that only at the end of the download can Wget know which links have been downloaded. Because of that, the work done by -k will be performed at the end of all the downloads.
If a (recursive) download gets interrupted and resumed manually, or if one fails to specify `-k` to begin with, how can one get sane links inside the HTML files?
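One workaround that seems plausible (untested here) is to re-run the original command with timestamping and `-k` added, e.g. via `--mirror`, which implies `-N`: files that are already on disk and up to date should be skipped, while wget still performs its link-conversion pass at the end of the run.

```
# Workaround sketch: --mirror (-r -N -l inf --no-remove-listing) skips
# up-to-date local files, and --convert-links still runs its rewrite pass
# once the crawl finishes. example.com is a placeholder.
wget --mirror --page-requisites --no-parent --convert-links https://example.com/
```

The obvious cost is that wget still issues a request per file just to compare timestamps.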
It seems not even `--backup-converted` can make the process more robust: either wget converts the links right after downloading everything (no missing files), or you're on your own (XPath etc.).
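For the "on your own" route, a crude pass is possible even without XPath. Below is a minimal sketch, assuming bash and GNU sed, a mirror rooted at `./example.com`, and pages whose links use the absolute `http://example.com/...` form; all names here are hypothetical:

```
#!/bin/bash
# Naive post-hoc link conversion: rewrite absolute URLs pointing at the
# mirrored host into relative paths, mimicking what -k would have done.
site=example.com

find "$site" -name '*.html' -print0 | while IFS= read -r -d '' f; do
  # Path of the file relative to the mirror root, e.g. foo/doc.html
  rel=${f#"$site"/}
  # Turn each leading directory component into "..": foo/bar -> ../..
  up=$(dirname "$rel" | sed -e 's|^\.$||' -e 's|[^/]\{1,\}|..|g')
  prefix=${up:+$up/}
  # http(s)://example.com/bar/img.gif -> ../bar/img.gif (GNU sed -i)
  sed -i "s|https\{0,1\}://$site/|$prefix|g" "$f"
done
```

This ignores everything `-k` handles carefully (relative source links, query strings, `index.html` for directory URLs, files that were never downloaded), so it is a sketch of the idea rather than a replacement.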