I want to download all the images from a web server, specifically the JPEGs. The command I'm running looks correct to me, and I know the site has JPEGs on it. For example:
wget -r -P C:/ -A.jpg http://somesitewithjpegs.com
My understanding is that this command recursively crawls the whole site, keeps only JPEG images, and downloads them to my C:/ drive. For some reason this is not working.
Looking at the page source, I can see that the images are not embedded directly in the page but are instead hosted in another directory (possibly another host) on the server. Is this why wget is failing to download them?
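For reference, here is a variant of the command I've been experimenting with. This is only a sketch: the domains are placeholders, and I'm assuming the images might be served from a different host than the pages, which is why I added host spanning (`-H`) with a domain whitelist (`-D`). I also spelled out the accept list and raised the recursion depth, in case the default depth of 5 was cutting the crawl short.

```shell
# Recursive crawl, unlimited depth, keep only JPEGs.
# -H / -D let wget follow links onto the (hypothetical) image host;
# -e robots=off is a guess in case robots.txt is blocking the crawl.
wget -r -l inf \
     -H -D somesitewithjpegs.com,images.somesitewithjpegs.com \
     -A jpg,jpeg \
     -e robots=off \
     -P C:/ \
     http://somesitewithjpegs.com
```

Note that wget still has to download the HTML pages temporarily to discover the image links; with `-A jpg,jpeg` it deletes the non-matching pages after following them, so only the JPEGs should remain in C:/.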