2

I was wondering: when I use urllib2.urlopen(), does it just read the headers, or does it actually bring back the entire webpage?

I.e., does the HTML page actually get fetched on the urlopen() call or on the read() call?

handle = urllib2.urlopen(url)
html = handle.read()

The reason I ask is for this workflow...

  • I have a list of urls (some of them with short url services)
  • I only want to read the webpage if I haven't seen that url before
  • I need to call urlopen() and use geturl() to get the final page that link goes to (after the 302 redirects) so I know if I've crawled it yet or not.
  • I don't want to incur the overhead of having to grab the html if I've already parsed that page.
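The workflow above can be sketched like this. This is a minimal illustration using urllib.request (the Python 3 successor to urllib2); the `seen` set and the function name are hypothetical, not from any library:

```python
# Sketch of the dedup workflow described above, using urllib.request
# (the Python 3 successor to urllib2). `crawl_new_pages` and `seen`
# are hypothetical names for illustration.
import urllib.request

def crawl_new_pages(urls, seen):
    """Read each page's body only if its final (post-redirect) URL is new."""
    pages = {}
    for url in urls:
        handle = urllib.request.urlopen(url)
        final_url = handle.geturl()       # URL after any 301/302 redirects
        if final_url in seen:
            handle.close()                # skip downloading the body
            continue
        seen.add(final_url)
        pages[final_url] = handle.read()  # the body is fetched here
    return pages
```

Whether the skipped close() actually saves the body transfer is exactly what the question is about; see the accepted answer below for what happens on the wire.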

thanks!

1
  • For others, can you tell us what scraping library you're using? Scrapy? Commented Jun 9, 2010 at 20:11

6 Answers

6

I just ran a test with Wireshark. When I called urllib2.urlopen('url-for-a-700-MB-file'), only the headers and a few packets of the body were retrieved immediately. It wasn't until I called read() that the majority of the body came across the network. This matches what I see when reading the source code of the httplib module.

So, to answer the original question, urlopen() does not fetch the whole body over the network. It fetches the headers and usually some of the body. The rest of the body is fetched when you call read().

The partial body fetch is to be expected, because:

  1. Unless you read an HTTP response one byte at a time, there is no way to know exactly how long the incoming headers will be, and therefore no way to know how many bytes to read before the body starts.

  2. An HTTP client has no control over how many bytes a server bundles into each TCP segment of a response.

In practice, since some of the body is usually fetched along with the headers, you might find that small bodies (e.g. small html pages) are fetched entirely on the urlopen() call.
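The split described here is observable from the response object itself. A small sketch, using urllib.request (the Python 3 successor to urllib2, which behaves the same way in this respect); the function name is hypothetical:

```python
# Headers are parsed and available as soon as urlopen() returns;
# the bulk of the body only leaves the socket buffers when read()
# is called. urllib.request is the Python 3 successor to urllib2.
import urllib.request

def fetch_with_header_peek(url):
    handle = urllib.request.urlopen(url)
    # No read() is needed to inspect header data:
    content_type = handle.headers.get("Content-Type")
    # The remainder of the body crosses the network here:
    body = handle.read()
    return content_type, body
```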

3
  • I bet the not-yet-retrieved packets of the body are clogging up OS or HW buffers (anywhere between server and client included -- possibly on routers &c in the middle)... hardly a good thing:-(. Commented Jun 9, 2010 at 21:03
  • 4
    Alex, I appreciate your concern, but nothing is getting clogged. HTTP runs atop TCP, which implements flow control. en.wikipedia.org/wiki/…
    – ʇsәɹoɈ
    Commented Jun 9, 2010 at 21:09
  • But given this concern over what gets fetched by urllib2.urlopen, isn't the requests module useful here? And if you really want to know the size of a page, requests.head(url, headers={'Accept-Encoding': 'identity'}).headers.get('content-length', None) can be used, and HTTP's support for range requests means you need not fetch the whole body. Commented Jul 9, 2014 at 15:13
3

urllib2 always uses HTTP method GET (or POST) and therefore inevitably gets the full page. To use HTTP method HEAD instead (which only gets the headers -- which are enough to follow redirects!), I think you just need to subclass urllib2.Request with your own class and override one short method:

import urllib2

class MyRequest(urllib2.Request):

    def get_method(self):
        return "HEAD"

and pass a suitably initialized instance of MyRequest to urllib2.urlopen.
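A usage sketch of the same idea in urllib.request (Python 3), where Request accepts a method argument directly, so the subclass above isn't needed; the function name is hypothetical:

```python
# HEAD-based redirect resolution sketch with urllib.request (the
# Python 3 successor to urllib2). Note that the stock redirect
# handler reissues redirected hops as plain GETs, so only the
# first request is guaranteed to be a HEAD.
import urllib.request

def resolve_final_url(url):
    """Issue a HEAD request and return the URL after redirects."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as handle:
        return handle.geturl()  # final URL; the body is never read
```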

3
  • Actually, testing with python 2.6 shows that only a little of the body is retrieved over the network in the urlopen() call. The rest waits until read() is called.
    – ʇsәɹoɈ
    Commented Jun 9, 2010 at 20:28
  • @Forest, the GET verb of HTTP is defined to retrieve the whole page; possibly the part of it that you're not seeing is clogging up OS and networking HW buffers (a pretty bad thing, BTW). Commented Jun 9, 2010 at 21:01
  • I believe the question was not about HTTP methods, but instead about what happens on the network at various stages of the urllib2 implementation. See my answer for details.
    – ʇsәɹoɈ
    Commented Jun 9, 2010 at 21:06
1

Testing with a local web server shows that urllib2.urlopen(url) is what fires the HTTP request; .read() does not fire another one.

0

On a side note, if you use Scrapy, it does HEAD requests intelligently for you. There's no point in rolling your own solution when this is already done so well elsewhere.

0

You can choose to read part of the response with something like...

urllib2.urlopen(urllib2.Request(url, None, requestHeaders)).read(CHUNKSIZE)

This reads back only CHUNKSIZE bytes of the body from the server; I've just checked.
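A sketch of that partial read with urllib.request (the Python 3 successor to urllib2). Note that a Request object itself has no read() method; you read from the response returned by urlopen(). CHUNKSIZE, the header dict, and the function name are placeholders:

```python
# Read only the first CHUNKSIZE bytes of a response body.
# urllib.request is the Python 3 successor to urllib2; CHUNKSIZE
# and request_headers are illustrative placeholders.
import urllib.request

CHUNKSIZE = 1024

def read_first_chunk(url, request_headers=None):
    """Return at most the first CHUNKSIZE bytes of the response body."""
    req = urllib.request.Request(url, None, request_headers or {})
    with urllib.request.urlopen(req) as handle:
        return handle.read(CHUNKSIZE)
```

Whatever the server has already pushed beyond CHUNKSIZE may still sit in OS buffers, but only this much is handed to your program.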

-1

From looking at the docs and the source, I'm pretty sure it gets the contents of the page. The returned object contains the page.
