+++
title = "An Introduction to Asyncio"
date = 2018-06-13
updated = 2020-10-03
[taxonomies]
category = ["sw"]
tags = ["python", "asyncio"]
+++

Index
-----

* [Background](#background)
* [Input / Output](#input_output)
* [Diving In](#diving_in)
* [A Toy Example](#a_toy_example)
* [A Real Example](#a_real_example)
* [Extra Material](#extra_material)


Background
----------

After seeing some friends struggle with `asyncio`, I decided it could be a good idea to write a blog post explaining, in my own words, how I understand the world of asynchronous IO. I will focus on Python's `asyncio` module, but the core ideas should carry over to other languages easily.

So what is `asyncio`, and what makes it good? Why don't we just use the good old, well-known threads to run several parts of the code concurrently?

The first reason is that `asyncio` makes your code easier to reason about. With threads, the number of ways in which your code can run grows exponentially. Let's see that with an example. Imagine you have this code:

```python
def method():
	line 1
	line 2
	line 3
	line 4
	line 5
```

And you start two threads to run the method at the same time. In what order do the lines of code get executed? The answer is that you can't know! The first thread can run the entire method before the second thread even starts. Or it could be the other way around, with the second thread running first. Perhaps both run line 1, and then line 2. Maybe the first thread runs lines 1 and 2, and then the second thread runs only line 1 before the first thread finishes.

As you can see, any combination of the order in which the lines run is possible. If the lines modify some global shared state, that will get messy quickly.

Second, in Python, threads *won't* make your code faster most of the time. They will only increase the concurrency of your program (which is okay if it makes many blocking calls), allowing you to run several things at the same time.

If you have a lot of CPU work to do though, threads aren't a real advantage. Indeed, your code will probably run slower under the most common Python implementation, CPython, which makes use of a Global Interpreter Lock (GIL) that only lets one thread run at a time. The operations won't run in parallel!

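That concurrency benefit for blocking calls is easy to observe. Here's a small sketch (using `time.sleep` as a stand-in for a blocking IO call): two threads that each block for 0.2 seconds finish in roughly 0.2 seconds total, not 0.4, because both block at the same time.

```python
import threading
import time


def blocking_task():
	# stand-in for a blocking IO call (e.g. a network request)
	time.sleep(0.2)


start = time.monotonic()
threads = [threading.Thread(target=blocking_task) for _ in range(2)]
for t in threads:
	t.start()
for t in threads:
	t.join()
elapsed = time.monotonic() - start
print(f'two blocking tasks took {elapsed:.2f}s')
```

Swap the `sleep` for a CPU-bound loop and the speedup disappears: the GIL lets only one thread execute Python bytecode at a time.
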
Input / Output
--------------

Before we go any further, let's first stop to talk about input and output, commonly known as "IO". There are two main ways to perform IO operations, such as reading or writing from a file or a network socket.

The first one is known as "blocking IO". What this means is that, when you try performing IO, the current application thread is going to *block* until the operating system can tell you it's done. Normally, this is not a problem, since disks are pretty fast anyway, but it can soon become a performance bottleneck. And network IO will be much slower than disk IO!

```python
import socket

# Set up a network socket and a very simple HTTP request.
# By default, sockets are opened in blocking mode.
sock = socket.socket()
request = b'''HEAD / HTTP/1.0\r
Host: example.com\r
\r
'''

# "connect" will block until a successful TCP connection
# is made to the host "example.com" on port 80.
sock.connect(('example.com', 80))

# "sendall" will repeatedly call "send" until all the data in "request" is
# sent to the host we just connected to, which blocks until the data is sent.
sock.sendall(request)

# "recv" will try to receive up to 1024 bytes from the host, and block until
# there is any data to receive (or return empty if the host closes the connection).
response = sock.recv(1024)

# After all those blocking calls, we got our data! These are the headers from
# making an HTTP request to example.com.
print(response.decode())
```

Blocking IO offers timeouts, so that you can get control back in your code if the operation doesn't finish in time. Imagine that the remote host doesn't want to reply: your code would be stuck for as long as the connection remains alive!

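Here's a quick sketch of that timeout mechanism, using a local socket pair so it runs without a real network: `recv` gives up after the timeout instead of blocking forever.

```python
import socket

# a connected pair of sockets, so we don't need a real network
a, b = socket.socketpair()

# give up on "recv" after 0.1 seconds instead of blocking forever
b.settimeout(0.1)

try:
	data = b.recv(16)
except socket.timeout:
	data = None
	print('no data after 0.1s, we got control back!')
```
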
But wait, what if we make the timeout small? Very, very small? If we do that, we will never block waiting for an answer. That's how asynchronous IO works, and it's the opposite of blocking IO (you can also call it non-blocking IO if you want).

How does non-blocking IO work if the IO device needs a while to answer with the data? In that case, the operating system responds with "not ready", and your application gets control back so it can do other stuff while the IO device completes your request. It works a bit like this:

```
<app> Hey, I would like to read 16 bytes from this file
<OS> Okay, but the disk hasn't sent me the data yet
<app> Alright, I will do something else then
(a lot of computer time passes)
<app> Do you have my 16 bytes now?
<OS> Yes, here they are! "Hello, world !!\n"
```

In reality, instead of polling (constantly asking the OS whether the data is ready yet or not), you can tell the OS to notify you when the data is ready, which is more efficient.

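The "not ready" conversation above can be reproduced with a plain non-blocking socket: if no data is ready, the OS raises `BlockingIOError` instead of blocking, and once the peer sends something, asking again succeeds immediately. A sketch with a local socket pair:

```python
import socket

a, b = socket.socketpair()
b.setblocking(False)  # turn on non-blocking mode

try:
	data = b.recv(16)
except BlockingIOError:
	# "Okay, but the disk hasn't sent me the data yet"
	data = None
	print('not ready, doing something else meanwhile...')

a.sendall(b'Hello, world !!\n')

# "Do you have my 16 bytes now?" -- yes, this returns immediately
data = b.recv(16)
print(data)
```
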
But either way, that's the difference between blocking and non-blocking IO. What matters is that your application gets to do more work instead of waiting for data to arrive: if the data is ready, it is returned immediately when you ask, and if it's not yet, your app can do other things meanwhile.


Diving In
---------

Now we've seen what blocking and non-blocking IO are, and how threads make your code harder to reason about while giving you concurrency (yet not more speed). Is there any other way to achieve this concurrency that doesn't involve threads? Yes! The answer is `asyncio`.

So how does `asyncio` help? First we need to understand a crucial concept before we can dive any deeper: the *event loop*. What is it, and why do we need it?

You can think of the event loop as a *loop* that will be responsible for calling your `async` functions:

![The Event Loop](eventloop.svg)

"That's silly", you may think. Now not only do we run our own code, but we also have to run some "event loop". It doesn't sound beneficial at all. What are these events? Well, they are the IO events we talked about before!

`asyncio`'s event loop is responsible for handling those IO events, such as "the file is ready", "the data has arrived", or "flushing is done". As we saw before, we can make the IO operations non-blocking by setting their timeout to 0.

Let's say you want to read from 10 files at the same time. You will ask the OS to read data from 10 files, and at first none of the reads will be ready. But the event loop will be constantly asking the OS which reads are done, and when they are, you will get your data.

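Under the hood, the loop relies on OS facilities that Python exposes through the `selectors` module. Here's a minimal sketch of "asking the OS which are done", with a local socket pair standing in for the files:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
b.setblocking(False)

# tell the OS we're interested in "b has data to read"
sel.register(b, selectors.EVENT_READ)

# nothing has been sent yet, so no events are ready
print(sel.select(timeout=0))  # []

a.sendall(b'now there is data')

# now the OS reports that "b" is ready to be read
events = sel.select(timeout=1)
data = events[0][0].fileobj.recv(1024)
print(data)
```

An event loop essentially wraps a loop like this, and dispatches each ready event to the code that was waiting for it.
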
This has some nice advantages. It means that, instead of blocking while you wait for a network request to send you a response, or for a file to be read, the event loop can decide to run other code meanwhile. Whenever the contents are ready, they can be read, and your code can continue. Waiting for the contents to be received is done with the `await` keyword, and it tells the loop that it can run other code meanwhile:

![Step 1, await keyword](awaitkwd1.svg)

![Step 2, await keyword](awaitkwd2.svg)

Start reading the code of the event loop and follow the arrows. You can see that, in the beginning, there are no events yet, so the loop calls one of your functions. The code runs until it has to `await` some IO operation, such as sending a request over the network. The method is "paused" until an event occurs (for example, an "event" occurs when the request has been sent completely).

While the first method is busy, the event loop can enter the second method, and run its code until its first `await`. It can happen that the second method's event occurs before the first method's, so the event loop can re-enter the second method, whose query has already been sent, while the first method still isn't done sending its request.

Then, the second method `await`s an answer, and an event occurs telling the event loop that the request from the first method was sent. Its code can be resumed again, until it has to `await` a response, and so on. Here's an explanation of this process with pseudo-code, if you prefer:

```python
async def method(request):
	prepare request
	await send request

	await receive response

	process response
	return result

run in parallel (
	method with request 1,
	method with request 2,
)
```

This is what the event loop will do on the above pseudo-code:

```
no events pending, can advance

enter method with request 1
	prepare request
	await sending request
pause method with request 1

no events ready, can advance

enter method with request 2
	prepare request
	await sending request
pause method with request 2

both requests are paused, cannot advance
wait for events
event for request 2 arrives (sending request completed)

enter method with request 2
	await receiving response
pause method with request 2

event for request 1 arrives (sending request completed)

enter method with request 1
	await receiving response
pause method with request 1

...and so on
```

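You can observe this interleaving for real with a tiny runnable sketch, using `asyncio.sleep(0)` as a stand-in for "await some IO" (it pauses the method and hands control back to the loop):

```python
import asyncio

order = []


async def method(n):
	order.append(f'enter method {n}')
	await asyncio.sleep(0)  # pretend this is "await sending request"
	order.append(f'resume method {n}')


async def main():
	# run both methods "in parallel"
	await asyncio.gather(method(1), method(2))


asyncio.run(main())
print(order)
# ['enter method 1', 'enter method 2', 'resume method 1', 'resume method 2']
```

Both methods enter before either resumes: each `await` is a spot where the loop switches to the other one.
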
You may be wondering: "okay, but threads work for me, so why should I change?". There are some important things to note here. The first is that we only need one thread to be running! The event loop decides when and which methods should run. This results in less pressure on the operating system. The second is that we know when it may run other methods. Those are the `await` keywords! Whenever there is one of those, we know that the loop is able to run other things until the resource (again, like the network) becomes ready (when an event occurs telling us it's ready to be used without blocking, or that it has completed).

So far, we already have two advantages. We are only using a single thread so the cost for switching between methods is low, and we can easily reason about where our program may interleave operations.

Another advantage is that, with the event loop, you can easily schedule when a piece of code should run, such as using the method [`loop.call_at`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.call_at), without the need for spawning another thread at all.

To tell `asyncio` to run the two methods shown above, we can use [`asyncio.ensure_future`](https://docs.python.org/3/library/asyncio-future.html#asyncio.ensure_future), which is a way of saying "I want the future of my method to be ensured". That is, you want to run your method in the future, whenever the loop is free to do so. This function returns a `Future` object, so if your method returns a value, you can `await` this future to retrieve its result.

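As a sketch (the `add` coroutine is made up for this demo, and I use `asyncio.run` for brevity):

```python
import asyncio


async def add(a, b):
	await asyncio.sleep(0)  # pretend some IO happens here
	return a + b


async def main():
	# schedule "add" to run whenever the loop is free to do so
	future = asyncio.ensure_future(add(2, 3))

	# the loop may run other code here...

	# ...and awaiting the future retrieves its result
	result = await future
	print(result)  # 5
	return result


asyncio.run(main())
```
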
What is a `Future`? This object represents the value of something that will be there in the future, but might not be there yet. Just like you can `await` your own `async def` functions, you can `await` these `Future`s.

The `async def` functions are also called "coroutines", and Python does some magic behind the scenes to turn them into such. Coroutines can be `await`ed, and this is what you normally do.
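
One detail worth seeing for yourself: calling an `async def` function does *not* run its body; it merely creates a coroutine object, which only runs when awaited or handed to the event loop (a sketch, using `asyncio.run` for brevity):

```python
import asyncio


async def greet():
	return 'hello'


# calling the function does NOT run its body...
coro = greet()
print(asyncio.iscoroutine(coro))  # True

# ...the body only runs once the coroutine is awaited or run by the loop
result = asyncio.run(coro)
print(result)  # hello
```
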


A Toy Example
-------------

That's all about `asyncio`! Let's wrap up with some example code. We will create a server that replies with the text a client sends, but reversed. First, we will show what you could write with normal synchronous code, and then we will port it to `asyncio`.

Here is the **synchronous version**:

```python
# server.py
import socket


def server_method():
	# create a new server socket to listen for connections
	server = socket.socket()

	# bind to localhost:6789 for new connections
	server.bind(('localhost', 6789))

	# we will listen for one client at most
	server.listen(1)

	# *block* waiting for a new client
	client, _ = server.accept()

	# *block* waiting for some data
	data = client.recv(1024)

	# reverse the data
	data = data[::-1]

	# *block* sending the data
	client.sendall(data)

	# close client and server
	client.close()
	server.close()


if __name__ == '__main__':
	# block running the server
	server_method()
```

```python
# client.py
import socket


def client_method():
	message = b'Hello Server!\n'
	client = socket.socket()

	# *block* trying to establish a connection
	client.connect(('localhost', 6789))

	# *block* trying to send the message
	print('Sending', message)
	client.sendall(message)

	# *block* until we receive a response
	response = client.recv(1024)
	print('Server replied', response)

	client.close()


if __name__ == '__main__':
	client_method()
```

From what we've seen, this code will block on all the lines with a comment above them saying so. This means that to run more than one client or server, or both in the same file, you would need threads. But we can do better: we can rewrite it with `asyncio`!

The first step is to mark all the `def`initions that may block with `async`. This marks them as coroutines, which can be `await`ed on.

Second, since we're using low-level sockets, we need to use the methods that `asyncio` provides directly. If this were a third-party library, this would be just like using their `async def`initions.

Here is the **asynchronous version**:

```python
# server.py
import asyncio
import socket

# get the default "event loop" that we will run
loop = asyncio.get_event_loop()


# notice our new "async" before the definition
async def server_method():
	server = socket.socket()
	# the loop's sock_* methods need the socket to be non-blocking
	server.setblocking(False)
	server.bind(('localhost', 6789))
	server.listen(1)

	# await a new client
	# the event loop can run other code while we wait here!
	client, _ = await loop.sock_accept(server)

	# await some data
	data = await loop.sock_recv(client, 1024)
	data = data[::-1]

	# await sending the data
	await loop.sock_sendall(client, data)

	client.close()
	server.close()


if __name__ == '__main__':
	# run the loop until "server_method" is complete
	loop.run_until_complete(server_method())
```

```python
# client.py
import asyncio
import socket

loop = asyncio.get_event_loop()


async def client_method():
	message = b'Hello Server!\n'
	client = socket.socket()
	# the loop's sock_* methods need the socket to be non-blocking
	client.setblocking(False)

	# await to establish a connection
	await loop.sock_connect(client, ('localhost', 6789))

	# await to send the message
	print('Sending', message)
	await loop.sock_sendall(client, message)

	# await to receive a response
	response = await loop.sock_recv(client, 1024)
	print('Server replied', response)

	client.close()


if __name__ == '__main__':
	loop.run_until_complete(client_method())
```

That's it! You can place these two files separately and run them: first the server, then the client. You should see output in the client.

The big difference here is that you can easily modify the code to run more than one server or client at the same time. Whenever you `await`, the event loop will run other parts of your code. It seems to "block" on the `await` parts, but remember it's actually jumping to run more code, and the event loop will get back to you whenever it can.

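As a sketch of that idea, here's a hypothetical single-file version that runs both halves concurrently on one thread with `asyncio.gather` (binding to port 0 so the OS picks a free port; the names and the future used to hand over the port are made up for this demo):

```python
# echo_reverse_demo.py -- both halves in one file, running concurrently
import asyncio
import socket


async def server_method(loop, port_future):
	server = socket.socket()
	server.setblocking(False)
	server.bind(('localhost', 0))  # port 0: let the OS pick a free port
	server.listen(1)
	port_future.set_result(server.getsockname()[1])  # tell the client the port

	client, _ = await loop.sock_accept(server)
	data = await loop.sock_recv(client, 1024)
	await loop.sock_sendall(client, data[::-1])
	client.close()
	server.close()


async def client_method(loop, port_future):
	port = await port_future  # wait until the server is listening
	sock = socket.socket()
	sock.setblocking(False)
	await loop.sock_connect(sock, ('localhost', port))
	await loop.sock_sendall(sock, b'Hello Server!')
	response = await loop.sock_recv(sock, 1024)
	sock.close()
	return response


async def main():
	loop = asyncio.get_running_loop()
	port_future = loop.create_future()
	_, response = await asyncio.gather(
		server_method(loop, port_future),
		client_method(loop, port_future),
	)
	return response


if __name__ == '__main__':
	print(asyncio.run(main()))  # b'!revreS olleH'
```

Every `await` here is a spot where the loop can jump between the server and the client, which is exactly why the two can talk to each other from a single thread.
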
In short, you need an `async def` to `await` things, and you run them with the event loop instead of calling them directly. So this…

```python
def main():
	...  # some code


if __name__ == '__main__':
	main()
```

…becomes this:

```python
import asyncio


async def main():
	...  # some code


if __name__ == '__main__':
	asyncio.get_event_loop().run_until_complete(main())
```

This is pretty much how most of your `async` scripts will start, running the main method until its completion.
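
On Python 3.7 and newer, [`asyncio.run`](https://docs.python.org/3/library/asyncio-runner.html#asyncio.run) wraps that boilerplate (create a loop, run the coroutine to completion, close the loop) in a single call:

```python
import asyncio


async def main():
	return 'some result'


if __name__ == '__main__':
	print(asyncio.run(main()))  # runs "main" until completion
```
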


A Real Example
--------------

Let's have some fun with a real library. We'll be using [Telethon](https://github.com/LonamiWebs/Telethon) to broadcast a message to our three best friends, all at the same time, thanks to the magic of `asyncio`. We'll dive right into the code, and then I'll explain our new friend `asyncio.wait(...)`:

```python
# broadcast.py
import asyncio
import sys

from telethon import TelegramClient

# (you need your own values here, check Telethon's documentation)
api_id = 123
api_hash = '123abc'
friends = [
	'@friend1__username',
	'@friend2__username',
	'@bestie__username'
]

# we will have to await things, so we need an async def
async def main(message):
	# start is a coroutine, so we need to await it to run it
	client = await TelegramClient('me', api_id, api_hash).start()

	# wait for all three client.send_message to complete
	await asyncio.wait([
		client.send_message(friend, message)
		for friend in friends
	])

	# and close our client
	await client.disconnect()


if __name__ == '__main__':
	if len(sys.argv) != 2:
		print('You must pass the message to broadcast!')
		quit()

	message = sys.argv[1]
	asyncio.get_event_loop().run_until_complete(main(message))
```

Wait… how did that send a message to all three of my friends? The magic is done here:

```python
[
	client.send_message(friend, message)
	for friend in friends
]
```

This list comprehension creates another list with three coroutines, the three `client.send_message(...)`. Then we just pass that list to `asyncio.wait`:

```python
await asyncio.wait([...])
```

This function, by default, waits for the list of coroutines to run until they've all finished. You can read more in the Python [documentation](https://docs.python.org/3/library/asyncio-task.html#asyncio.wait). Truly a good function to know about!

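Here's a self-contained sketch of `asyncio.wait` with made-up `work` coroutines (note that newer Python versions want you to pass tasks, e.g. created with `asyncio.ensure_future`, rather than bare coroutines):

```python
import asyncio


async def work(n):
	await asyncio.sleep(0)  # pretend this sends a message
	return n * 2


async def main():
	# wrap the coroutines in tasks (required on newer Python versions)
	tasks = [asyncio.ensure_future(work(n)) for n in (1, 2, 3)]

	# wait until all of them have finished
	done, pending = await asyncio.wait(tasks)
	assert not pending  # by default, wait returns when ALL are completed
	return sorted(task.result() for task in done)


print(asyncio.run(main()))  # [2, 4, 6]
```

`asyncio.wait` returns two sets, `done` and `pending`; with the default `return_when`, everything ends up in `done` and you pull each result out of its task.
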
Now whenever you have some important news for your friends, you can simply run `python3 broadcast.py 'I bought a car!'` to tell all of them about your new car! All you need to remember is that you need to `await` coroutines, and you will be good. `asyncio` will warn you when you forget to do so.


Extra Material
--------------

If you want to understand how `asyncio` works under the hood, I recommend watching this hour-long talk, [Get to grips with asyncio in Python 3](https://youtu.be/M-UcUs7IMIM), by Robert Smallshire. In the video, they explain the differences between concurrency and parallelism, along with other concepts, and how to implement your own `asyncio` "scheduler" from scratch.