Python Requests Post and Then Post Again

Python Requests Tutorial — Get and POST Requests in Python

In this Requests tutorial article, you will learn all the basics of the requests module in Python to get you started with using Requests. We will be covering the following topics in this blog:

  • What is the Requests module?
  • Installing Requests module
  • Making a GET Request
  • Downloading Image with Requests
  • Making POST Requests
  • Sending Cookies and Headers
  • Session Objects
  • Conclusion

Let us begin this "Requests Tutorial" blog by first checking out what the Requests module really is.

What Is the Requests Module?

Requests is a Python module that you can use to send all kinds of HTTP requests. It is an easy-to-use library with a lot of features ranging from passing parameters in URLs to sending custom headers and SSL verification. In this tutorial, you will learn how to use this library to send simple HTTP requests in Python.

Requests allows you to send HTTP/1.1 requests. You can add headers, form data, multipart files, and parameters with simple Python dictionaries, and access the response data in the same fashion.

Installing the Requests Module

To install requests, simply:

          $ pip install requests        

Or, if you absolutely must:

          $ easy_install requests        

Making a GET Request

It is fairly straightforward to send an HTTP request using Requests. You start by importing the module and then making the request. Check out the example:

          import requests            
req = requests.get('https://www.edureka.co/')

So, all the information is stored somewhere, right?

Yes, it is stored in a Response object called req.

Let's say, for instance, you want the encoding of a web page so that you can verify it or use it somewhere else. This can be done using the req.encoding property.

An added plus is that you can also extract many attributes of the response, like the status code, for example. This can be done using the req.status_code property.

          req.encoding # returns 'utf-8'            
req.status_code # returns 200

We can also access the cookies that the server sent back. This is done using req.cookies, as straightforward as that! Similarly, you can get the response headers as well. This is done by making use of req.headers.

Do note that the req.headers property will return a case-insensitive dictionary of the response headers. So, what does this imply?

This means that req.headers['Content-Length'], req.headers['content-length'], and req.headers['CONTENT-LENGTH'] will all return the value of the same 'Content-Length' response header.
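
The case-insensitive behavior comes from the CaseInsensitiveDict class that Requests uses for response headers. Here is a quick sketch that builds one by hand (the header value is made up) rather than from a live response, so it runs offline:

```python
from requests.structures import CaseInsensitiveDict

# Response headers are stored in a CaseInsensitiveDict,
# so lookups ignore the casing of the key.
headers = CaseInsensitiveDict({'Content-Length': '1024'})

print(headers['Content-Length'])   # '1024'
print(headers['content-length'])   # '1024'
print(headers['CONTENT-LENGTH'])   # '1024'
```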

We can also check whether the response obtained is a well-formed HTTP redirect (or not) that could have been processed automatically, using the req.is_redirect property. This will return True or False based on the response obtained.

You can also get the time elapsed between sending the request and getting back a response using another property. Take a guess? Yes, it is the req.elapsed property.

Remember the URL that you initially passed to the get() function? Well, it can be different from the final URL of the response for many reasons, and this includes redirects as well.

And to see the actual response URL, you can use the req.url property.

          import requests
req = requests.get('http://www.edureka.co/')

req.encoding # returns 'utf-8'
req.status_code # returns 200
req.elapsed # returns datetime.timedelta(0, 1, 666890)
req.url # returns 'https://edureka.co/'

req.history
# returns [<Response [301]>, <Response [301]>]

req.headers['Content-Type']
# returns 'text/html; charset=utf-8'

Getting all this data about the webpage is nice, but you most probably want to access the actual content, right?

If the content you are accessing is text, you can always use the req.text property to access it. Do note that the content is then parsed as Unicode. You can set the encoding with which to decode this text using the req.encoding property, as we discussed before.

In the case of non-text responses, you can access them very easily as well: req.content gives you the response body in binary format. The module will automatically decode the gzip and deflate transfer-encodings for us. This can be very helpful when you are dealing directly with media files. Also, you can access the JSON-encoded content of the response, if it exists, using the req.json() method.
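
To see how these three accessors relate without hitting the network, here is a small sketch that fills in a Response object by hand (the body and encoding are made up for illustration, and assigning _content uses a private attribute purely for demonstration; normally requests.get() populates these fields):

```python
import requests

# Build a Response by hand so the example runs offline.
resp = requests.models.Response()
resp.status_code = 200
resp.encoding = 'utf-8'
resp._content = b'{"course": "Python"}'   # private attribute, demo only

print(resp.content)  # the raw bytes: b'{"course": "Python"}'
print(resp.text)     # the decoded text: '{"course": "Python"}'
print(resp.json())   # the parsed JSON: {'course': 'Python'}
```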

Pretty simple and a lot of flexibility, right?

Also, if needed, you can get the raw response from the server simply by using req.raw. Do keep in mind that you will have to pass stream=True in the request to get the raw response.

But some files that you download from the internet using the Requests module may have a huge size, right? Well, in such cases, it is not wise to load the whole response or file into memory at once. Instead, it is recommended that you download the file in pieces or chunks using the iter_content(chunk_size=1, decode_unicode=False) method.

This method iterates over the response data, chunk_size bytes at a time. And when stream=True has been set on the request, this method will avoid reading the whole file into memory at once for large responses.

Do note that the chunk_size parameter can be either an integer or None. When set to an integer value, chunk_size determines the number of bytes that should be read into memory at once.

When chunk_size is set to None and stream is set to True, the data is read as it arrives, in whatever size the chunks happen to be received. When chunk_size is set to None and stream is set to False, all the data is returned as a single chunk.
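
The integer case can be checked offline with a hand-built Response carrying a 120,000-byte body (the size is arbitrary, chosen to leave an uneven final chunk; assigning _content and _content_consumed uses private Response internals purely for demonstration):

```python
import requests

# Hand-built Response with a fully buffered 120,000-byte body,
# so iter_content() slices it without a live connection.
resp = requests.models.Response()
resp._content = b'x' * 120000       # private attribute, demo only
resp._content_consumed = True       # mark the body as fully read

# chunk_size=50000 should yield two full chunks and one remainder.
sizes = [len(chunk) for chunk in resp.iter_content(chunk_size=50000)]
print(sizes)  # [50000, 50000, 20000]
```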

Downloading an Image Using the Requests Module

So let's download the following image of a forest on Pixabay using the Requests module we just learned about. Here is the actual image:

This is the code that you will need to download the image:

          import requests
req = requests.get('path/to/forest.jpg', stream=True)
req.raise_for_status()
with open('Forest.jpg', 'wb') as fd:
    for chunk in req.iter_content(chunk_size=50000):
        print('Received a Chunk')
        fd.write(chunk)

Note that 'path/to/forest.jpg' stands in for the actual image URL. You can put the URL of any other image here to download something else instead. This is merely an example; the given image file is about 185 KB in size, and we have set chunk_size to 50,000 bytes.

This means that the "Received a Chunk" message should be printed four times in the terminal. The size of the last chunk will be just 39,350 bytes, because the part of the file that remains to be received after the first three iterations is 39,350 bytes.

Requests also allows you to pass parameters in a URL. This is particularly helpful when you are searching a webpage for some results, like a tutorial or a specific image. You can provide these query strings as a dictionary of strings using the params keyword in the GET request. Check out this easy example:

          import requests

query = {'q': 'Forest', 'order': 'popular', 'min_width': '800', 'min_height': '600'}
req = requests.get('https://pixabay.com/en/photos/', params=query)

req.url
# returns 'https://pixabay.com/en/photos/?order=popular&min_height=600&q=Forest&min_width=800'

Next up in this "Requests Tutorial" blog, let us look at how we can make a POST request!

Making a POST Request

Making a POST request is just as easy as making GET requests. You simply use the post() function instead of get().

This can be useful when you are automatically submitting forms. For example, the following code will download the whole Wikipedia page on Nanotechnology and save it on your PC.

          import requests
req = requests.post('https://en.wikipedia.org/w/index.php', data={'search': 'Nanotechnology'})
req.raise_for_status()
with open('Nanotechnology.html', 'wb') as fd:
    for chunk in req.iter_content(chunk_size=50000):
        fd.write(chunk)

Sending Cookies and Headers

As previously mentioned, you can access the cookies and headers that the server sends back to you using req.cookies and req.headers. Requests also allows you to send your own custom cookies and headers with a request. This can be helpful when you want to, let's say, set a custom user agent for your request.

To add HTTP headers to a request, you can simply pass them in a dict to the headers parameter. Similarly, you can also send your own cookies to a server using a dict passed to the cookies parameter.

          import requests

url = 'http://some-domain.com/set/cookies/headers'

headers = {'user-agent': 'your-own-user-agent/0.0.1'}
cookies = {'visit-month': 'February'}

req = requests.get(url, headers=headers, cookies=cookies)

Cookies can also be passed in a cookie jar. It provides a more complete interface that allows you to use those cookies over multiple paths.

Check out this example below:

          import requests

jar = requests.cookies.RequestsCookieJar()
jar.set('first_cookie', 'first', domain='httpbin.org', path='/cookies')
jar.set('second_cookie', 'second', domain='httpbin.org', path='/extra')
jar.set('third_cookie', 'third', domain='httpbin.org', path='/cookies')

url = 'http://httpbin.org/cookies'
req = requests.get(url, cookies=jar)

req.text

# returns '{ "cookies": { "first_cookie": "first", "third_cookie": "third" }}'

Next up in this "Requests Tutorial" blog, let us look at session objects!

Session Objects

Sometimes it is useful to preserve certain parameters across multiple requests. The Session object does exactly that. For instance, it will persist cookie data across all requests made using the same session.

The Session object uses urllib3's connection pooling. This means that the underlying TCP connection will be reused for all the requests made to the same host.

This can significantly boost performance. You can also use the methods of the main Requests API with the Session object.

Sessions are also helpful when you want to send the same data across all requests. For example, if you decide to send a cookie or a user-agent header with all the requests to a given domain, you can use Session objects.

Here is an example:

          import requests

ssn = requests.Session()
ssn.cookies.update({'visit-month': 'February'})

reqOne = ssn.get('http://httpbin.org/cookies')
print(reqOne.text)
# prints data about the "visit-month" cookie

reqTwo = ssn.get('http://httpbin.org/cookies', cookies={'visit-year': '2017'})
print(reqTwo.text)
# prints data about the "visit-month" and "visit-year" cookies

reqThree = ssn.get('http://httpbin.org/cookies')
print(reqThree.text)
# prints data about the "visit-month" cookie

As you can see, the "visit-month" session cookie is sent with all three requests. However, the "visit-year" cookie is sent only during the second request. There is no mention of the "visit-year" cookie in the third request either. This confirms that cookies or other data set on individual requests won't be sent with the other session requests.

Conclusion

The concepts discussed in this tutorial should help you make basic requests to a server by passing specific headers, cookies, or query strings.

This will be very handy when you are trying to scrape some web pages for information. Now, you should also be able to automatically download music files and wallpapers from different websites once you have figured out a pattern in the URLs.

I hope you have enjoyed this post on the Requests tutorial. If you wish to check out more articles on the market's most trending technologies like Artificial Intelligence, DevOps, and Ethical Hacking, then you can refer to Edureka's official site.

Do look out for other articles in this series which will explain the various other aspects of Python and Data Science.

1. Python Tutorial

2. Python Programming Language

3. Python Functions

4. File Handling in Python

5. Python Numpy Tutorial

6. Scikit Learn Machine Learning

7. Python Pandas Tutorial

8. Matplotlib Tutorial

9. Tkinter Tutorial

10. PyGame Tutorial

11. OpenCV Tutorial

12. Web Scraping With Python

13. PyCharm Tutorial

14. Machine Learning Tutorial

15. Linear Regression Algorithm from scratch in Python

16. Python for Data Science

17. Python Regex

18. Loops in Python

19. Python Projects

20. Machine Learning Projects

21. Arrays in Python

22. Sets in Python

23. Multithreading in Python

24. Python Interview Questions

25. Java vs Python

26. How To Become A Python Programmer?

27. Python Lambda Functions

28. How Netflix uses Python?

29. What is Socket Programming in Python

30. Python Database Connection

31. Golang vs Python

32. Python Seaborn Tutorial

33. Python Career Opportunities


Source: https://medium.com/edureka/python-requests-tutorial-30edabfa6a1c
