Python Requests Library

This article covers the use of the Python Requests Library.

The Python requests library lets you easily download files from the Web without having to worry about many complicated issues such as network errors, connection problems, and data compression.

The Requests module was created as an alternative to Python's urllib2 module (urllib in Python 3), which is unnecessarily complex and lacks features when compared to requests.

Python requests does not come bundled with Python, so you'll have to install it from the command prompt (or a similar terminal) with pip install requests. Refer to our Python Installation guide if you're having trouble. If it's installed successfully, you should be able to run the following line in your IDE of choice without any errors being raised.

import requests

Python requests by itself can be a little limited (unless your focus is entirely on creating connections). It's great for establishing hassle-free connections and sending and receiving data over them, but it isn't designed to do much past that point. For this reason, Python requests is commonly used alongside other web-related Python libraries such as Beautiful Soup, which only parses HTML and relies on a library like requests to download the page content first.

Python requests has several core functions, such as get(), post(), put() and delete(). put() and delete() are rarely used, however, so we won't be discussing them here beyond the basic syntax shown below. Keep in mind that each of these functions takes many additional parameters in real-life scenarios.

import requests

resp = requests.get('https://httpbin.org/get')
resp ='https://httpbin.org/post')
resp = requests.put('https://httpbin.org/put')
resp = requests.delete('https://httpbin.org/delete')
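As a quick sketch of those extra parameters, the snippet below shows how a params dictionary is encoded into the URL's query string. It uses requests' Request/prepare machinery so nothing is actually sent over the network (the httpbin.org endpoint is just a test target):

```python
import requests

# Build (but do not send) a GET request with query parameters;
# requests encodes the params dict into the URL's query string.
req = requests.Request(
    'GET',
    'https://httpbin.org/get',
    params={'page': '2', 'sort': 'asc'},
)
prepared = req.prepare()

print(prepared.url)  # https://httpbin.org/get?page=2&sort=asc
```

When you call requests.get() with a params argument, this same encoding happens behind the scenes before the request is sent.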

Every time one of these functions is called, a response object is returned containing all the response data. This includes information regarding the status of the request, the encoding used, and the contents of the response.

The Get Function

Due to how commonly the get function is used, this article will center on it. Let's start off with a simple get request. We'll be connecting to httpbin.org, a site designed to receive test HTTP requests.

>>> import requests
>>> response = requests.get('https://httpbin.org/get')

The first thing to do is to make sure we connected successfully. As mentioned before, the returned response object contains information regarding the status of the connection. You can look up the full list of HTTP status codes online, but generally anything in the 2XX range means a successful connection, 3XX means a redirection, and 4XX and 5XX indicate client and server errors respectively.

>>> response.status_code
200
As you can see, the number 200 was returned because the connection was successful. We can use these status codes to make decisions in our code. See the following example.

import requests

response = requests.get('https://httpbin.org/get')
code = response.status_code

if 200 <= code < 300:
    print('Success!')
elif 300 <= code < 400:
    print('Redirected.')
    print('Request failed with status code', code)

Keep in mind that 2XX status codes only mean that the request was successful in a general sense. For instance, 204 means the request succeeded but no content was returned.
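To make range checks like the ones above reusable, here is a small helper of our own (an illustration, not part of the requests library) that maps a status code to its category:

```python
def status_category(code: int) -> str:
    """Classify an HTTP status code by its numeric range."""
    if 200 <= code < 300:
        return 'success'
    if 300 <= code < 400:
        return 'redirect'
    if 400 <= code < 500:
        return 'client error'
    if 500 <= code < 600:
        return 'server error'
    return 'unknown'

print(status_category(200))  # success
print(status_category(404))  # client error
print(status_category(503))  # server error
```

For the common case of "raise an exception on any error code", requests also provides response.raise_for_status(), which raises an HTTPError for 4XX and 5XX responses.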


Next up are headers. These usually contain vital information regarding the connection just made and the content being returned. Accessing the attribute below returns this information in the form of a dictionary-like object.

>>> response.headers
{'Date': 'Tue, 28 Apr 2020 06:25:40 GMT', 'Content-Type': 'application/json', 'Content-Length': '307', 'Connection': 'keep-alive', 'Server': 'gunicorn/19.9.0', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true'}

Above you can see information such as the exact time the connection was made, the content type, the server software, and the content length. You can access each of these individually using the same key-value syntax as a regular dictionary.
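One detail worth knowing: response.headers is not a plain dict but a case-insensitive one, so lookups work regardless of capitalization. The offline sketch below builds one by hand using the same structure requests uses internally:

```python
from requests.structures import CaseInsensitiveDict

# response.headers is a CaseInsensitiveDict; we construct one
# manually here just to demonstrate the lookup behavior.
headers = CaseInsensitiveDict()
headers['Content-Type'] = 'application/json'

print(headers['content-type'])  # application/json
print(headers['CONTENT-TYPE'])  # application/json
```

This matches how HTTP itself treats header names: they are case-insensitive by specification.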


Now we’ll discuss how to retrieve the content stored in the response object.

Below are three different ways to do so. They all return the same data, but in different types: response.content returns the content as bytes, response.text returns it as a string, whereas response.json() parses it into a dictionary (provided the response body is valid JSON).

>>> response.content
>>> response.text
>>> response.json()
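To see the three return types side by side without making a network call, the sketch below constructs a bare Response object by hand. This is not something you would normally do (requests.get() builds the object for you, and _content is a private attribute), but it illustrates the accessors:

```python
import requests

# Hand-built response purely for demonstration purposes.
resp = requests.models.Response()
resp.status_code = 200
resp.encoding = 'utf-8'
resp._content = b'{"origin": ""}'

print(type(resp.content))  # <class 'bytes'>
print(type(resp.text))     # <class 'str'>
print(resp.json())         # {'origin': ''}
```

Note that resp.json() will raise an exception if the body is not valid JSON, so it is worth wrapping in a try/except when the content type is uncertain.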

Authenticated requests

While you can access most of the web without a username and password, many areas will be off limits to you without one. The most common example is logging into an account. Each website has its own peculiarities, so we'll just discuss a general example here.

URL = ''
requests.get(URL, auth=('[email protected]', 'mypassword'))

You have to pass string values of both your username (or email) and password into the auth parameter. Attempting to access the web page without the correct username and password will result in a 401 error.
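Under the hood, that (username, password) tuple is turned into an HTTP Basic Authorization header. You can see this offline with a prepared request, without sending anything (the endpoint and credentials below are made up for illustration):

```python
import requests

# Prepare (but do not send) a request with dummy credentials.
req = requests.Request(
    'GET',
    'https://httpbin.org/basic-auth/user/pass',
    auth=('user', 'pass'),
)
prepared = req.prepare()

# The (username, password) tuple is base64-encoded into a
# standard HTTP Basic Authorization header.
print(prepared.headers['Authorization'])  # Basic dXNlcjpwYXNz
```

Because Basic auth is only base64-encoded (not encrypted), credentials should only ever be sent over HTTPS.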

If you're interested in learning how to automate logins, you may also want to look into the Python web scraping libraries Selenium and BeautifulSoup, which offer an alternative way of doing so.

Disabling SSL validation

While SSL is an important security feature, so much so that requests keeps SSL validation on by default, there may be times you wish to turn it off, such as when testing against a server with a self-signed certificate. Luckily, you can do so simply by assigning the verify parameter the value False.

>>> requests.get('https://httpbin.org', verify=False)
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See:
<Response [200]>

Interested in learning more about Networking? Check out this article on awesome books for Computer Networks!

This marks the end of the Python Requests Library tutorial. Any suggestions or contributions for CodersLegacy are more than welcome. Questions can be directed to the comments section below.
