  • 8
How does this handle large files? Does everything get stored in memory, or can this be written to a file without a large memory requirement? Commented Dec 17, 2012 at 16:05
  • 9
It is possible to stream large files by setting stream=True in the request. You can then call iter_content() on the response to read a chunk at a time (see the first sketch after these comments).
    – kvance
    Commented Jul 28, 2013 at 17:14
  • 8
Why would a URL library need to have a file unzip facility? Read the file from the URL, save it, and then unzip it in whatever way floats your boat (see the second sketch after these comments). Also, a zip file is not a 'folder' as Windows shows it; it's a file.
    – Harel
    Commented Nov 15, 2013 at 16:36
  • 2
@Ali: r.text is for text or unicode content, returned as unicode; r.content is for binary content, returned as bytes (see the third sketch after these comments). Read about it here: docs.python-requests.org/en/latest/user/quickstart
    – hughdbrown
    Commented Jan 17, 2016 at 18:44
  • 6
I think a chunk_size argument is desirable along with stream=True. The default chunk_size is 1, which means each chunk can be as small as 1 byte, which is very inefficient (see the fourth sketch after these comments).
    – haridsv
    Commented Oct 1, 2018 at 10:54
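
First sketch: a minimal example of the streaming download kvance describes; the URL and filename are placeholders, and the chunk size is an arbitrary choice.

    import requests

    url = "https://example.com/big-file.zip"  # placeholder URL
    with requests.get(url, stream=True) as r:  # stream=True: body is not downloaded up front
        r.raise_for_status()
        with open("big-file.zip", "wb") as f:
            # iter_content() reads the body one chunk at a time,
            # so the whole file is never held in memory
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)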
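Second sketch: the save-then-unzip workflow Harel suggests, using the standard-library zipfile module; the URL and file/directory names are placeholders.

    import zipfile

    import requests

    url = "https://example.com/archive.zip"  # placeholder URL
    r = requests.get(url)
    r.raise_for_status()
    with open("archive.zip", "wb") as f:
        f.write(r.content)  # save the raw bytes to disk first
    with zipfile.ZipFile("archive.zip") as zf:
        zf.extractall("archive")  # then unpack in whatever way you like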
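Third sketch: a short illustration of the r.text versus r.content distinction hughdbrown points out; the URL is a placeholder.

    import requests

    r = requests.get("https://example.com")  # placeholder URL
    page_text = r.text      # str: the body decoded to unicode
    page_bytes = r.content  # bytes: the raw body, appropriate for binary data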
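Fourth sketch: haridsv's refinement of the streaming pattern, passing an explicit chunk_size; the URL, filename, and 64 KiB value are placeholders.

    import requests

    url = "https://example.com/big-file.bin"  # placeholder URL
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open("big-file.bin", "wb") as f:
            # the default chunk_size is 1 byte; an explicit, larger value
            # avoids iterating over the response one byte at a time
            for chunk in r.iter_content(chunk_size=64 * 1024):
                f.write(chunk)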