
So, I'm messing around with urllib.request in Python 3 and am wondering how to write the result of getting an internet file to a file on the local machine. I tried this:

g = urllib.request.urlopen('http://media-mcw.cursecdn.com/3/3f/Beta.png')
with open('test.png', 'b+w') as f:
    f.write(g)

But I got this error:

TypeError: 'HTTPResponse' does not support the buffer interface

What am I doing wrong?

NOTE: I have seen this question, but it's related to Python 2's urllib2 which was overhauled in Python 3.


2 Answers


Change

f.write(g)

to

f.write(g.read())
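Put together, the corrected pattern looks like the sketch below. It uses a local `file://` URL as a stand-in for the remote image so it runs without network access; in practice you would pass your `http` URL to `urlopen` instead.

```python
import pathlib
import tempfile
import urllib.request

# A small local file stands in for the remote image so this runs offline.
tmpdir = pathlib.Path(tempfile.mkdtemp())
src = tmpdir / 'Beta.png'
src.write_bytes(b'\x89PNG fake image data')

# urlopen returns a response object; .read() yields the raw bytes,
# which is what a file opened in binary mode expects.
g = urllib.request.urlopen(src.as_uri())
with open(tmpdir / 'test.png', 'wb') as f:
    f.write(g.read())
```

Note that `'wb'` is the conventional spelling of the binary write mode, though `'b+w'` from the question is also accepted.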

An easier way, I think (it also takes only two lines), is to use:

import urllib.request
urllib.request.urlretrieve('http://media-mcw.cursecdn.com/3/3f/Beta.png', 'test.png')

As for the method you used: g = urllib.request.urlopen('http://media-mcw.cursecdn.com/3/3f/Beta.png') only fetches the file. You must call g.read(), g.readlines() or g.readline() to actually read it.

It's just like reading a normal file (except for the syntax) and can be treated in a very similar way.
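Because the response behaves like a file object, a large download can also be streamed in chunks with shutil.copyfileobj rather than read into memory all at once. A minimal sketch, again using a `file://` URL as a stand-in so it runs anywhere:

```python
import pathlib
import shutil
import tempfile
import urllib.request

# A local file stands in for a large remote download so this runs offline.
tmpdir = pathlib.Path(tempfile.mkdtemp())
src = tmpdir / 'source.bin'
src.write_bytes(b'x' * 1_000_000)

# The response supports .read(size), so copyfileobj can stream it
# chunk by chunk instead of loading the whole body into memory.
with urllib.request.urlopen(src.as_uri()) as response, \
        open(tmpdir / 'copy.bin', 'wb') as out:
    shutil.copyfileobj(response, out, length=64 * 1024)
```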

  • The PEP20 would have you use Request from urllib.request, but yours is one line of code shorter. Information about PEP20 for Request. You can chain open() to file.write(url.read()) like you mentioned.
    – Debug255
    Commented Feb 14, 2018 at 2:36
  • @Debug255 Are you sure? The link says "Open the URL url, which can be either a string or a Request object."; here I specified a string, so I don't think Request is required in this case.
    – Xantium
    Commented Feb 14, 2018 at 8:46
  • That worked on debian9 using python3.5. I don't use 2.7 too much.
    – Debug255
    Commented Mar 13, 2018 at 4:27
  • This doesn't work if you have to get round the 403: Forbidden issue; see stackoverflow.com/a/16187955/563247
    – Sevenearths
    Commented Apr 29, 2020 at 10:56
  • @Sevenearths 403 is a Forbidden error. This usually happens when a website (server) attempts to block a bot, or when you try to access a webpage with incorrect login/certificate information (usually cookie-related in my experience, like passing outdated information). Seeing as the solution you listed uses a user agent, it strongly looks like that site attempts to block bots (which makes sense, since it's a news site); a user agent tricks the server into thinking it's a legitimate browser.
    – Xantium
    Commented Apr 29, 2020 at 13:36
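Following the comments above, the usual way to send a browser-like User-Agent with urllib is to wrap the URL in a urllib.request.Request with a headers dict and pass that to urlopen. A hedged sketch (the header value is just an example string, and a local `file://` URL stands in for the remote server so the code runs offline; against a real server the extra header is what can avoid the 403):

```python
import pathlib
import tempfile
import urllib.request

# Local stand-in for the remote resource so this sketch runs offline.
tmpdir = pathlib.Path(tempfile.mkdtemp())
src = tmpdir / 'page.bin'
src.write_bytes(b'hello')

# Request lets you attach headers; 'Mozilla/5.0' is an example value,
# not a requirement -- any browser-like string serves the same purpose.
req = urllib.request.Request(
    src.as_uri(),
    headers={'User-Agent': 'Mozilla/5.0'},
)
with urllib.request.urlopen(req) as response:
    data = response.read()
```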
