I'd like to download some Wikipedia pages as part of a program. Currently I go to each page in a browser, click "Download as PDF" manually, wait for it to render, download it, then rename it and move it to the location I want. I'd like to automate this by downloading the pages in bash or Python. Is this possible (without using webdriver)?
-
Just two quick links to get you started: stackoverflow.com/a/627606/935614 and mediawiki.org/wiki/API:Main_page. Or do you want them as PDF and not only the content? – nixda, Aug 25, 2015 at 23:23
-
Thanks. I just want to dump them as PDF for reference (something I might read later). I don't need to parse the content. – ceiling cat, Aug 26, 2015 at 16:50
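A minimal Python sketch of the no-webdriver approach, assuming the wiki exposes Wikipedia's REST `page/pdf` endpoint (the function names and destination path here are illustrative, not from the thread):

```python
from urllib.parse import quote
from urllib.request import urlopen

def pdf_url(title):
    # Wikipedia's REST API renders a page to PDF at this endpoint;
    # spaces in titles become underscores, the rest is percent-encoded
    return "https://en.wikipedia.org/api/rest_v1/page/pdf/" + quote(title.replace(" ", "_"))

def download_pdf(title, dest):
    # fetch the rendered PDF and write it straight to the chosen path,
    # replacing the manual render-download-rename-move steps
    with urlopen(pdf_url(title)) as resp, open(dest, "wb") as out:
        out.write(resp.read())

# e.g. download_pdf("Albert Einstein", "/tmp/Albert_Einstein.pdf")
```

This skips the browser entirely: the server does the rendering, and you control the filename and destination in one call.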