Load testing with Python

sonic182 | Jan 10, 2020

In this post I’ll show how to do some load testing of an HTTP application with Python.

For this we will use a package called aioload, a very nice and minimal load-testing tool written in Python.

This tool requires Python >= 3.6, and it suggests installing uvloop for better performance:

# optional, highly recommended
pip install uvloop
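
aioload itself can also be installed with pip (assuming the package is published on PyPI under the same name as the tool):

# the load testing tool itself
pip install aioload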

Now we need something to test. For this sample I’ll write a Flask microservice with some minimal logic that we will later optimize:

from flask import Flask, escape, request
from time import sleep

app = Flask(__name__)
application = app  # gunicorn looks for a callable named "application" by default

@app.route('/')
def hello():
    name = request.args.get("name", "World")
    sleep(0.1)  # slow code to remove
    return 'Hello, {}!'.format(escape(name))

if __name__ == '__main__':
    app.run('0.0.0.0', 8080)
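
The gunicorn command below expects this file to be saved as service.py. For a quick check without gunicorn you can also run the module directly, which starts the Flask development server through the app.run call at the bottom:

python service.py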

Now we can run this example with gunicorn:

# install dependencies
pip install flask gunicorn
# run app with gunicorn
gunicorn --bind 0.0.0.0:8080 service
[2020-01-10 19:53:40 +0100] [62437] [INFO] Starting gunicorn 20.0.4
[2020-01-10 19:53:40 +0100] [62437] [INFO] Listening at: http://0.0.0.0:8080 (62437)
[2020-01-10 19:53:40 +0100] [62437] [INFO] Using worker: sync
[2020-01-10 19:53:40 +0100] [62440] [INFO] Booting worker with pid: 62440

We can check that the service is running with curl:

curl "http://localhost:8080/?name=Python"
Hello, Python!%

Aioload

To use aioload we need to define a config file for our test; the project repository contains a very nice sample to start from. For our example we will use the following, saved as test.ini:

[http]
sock_read = 30
sock_connect = 3

[test]
url = "http://localhost:8080/"
method = "GET"

[params]
# We will get Hello, Python! in response
name = "Python"

[headers]
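
Judging by the comment in the config, the entries under [params] become query-string parameters, so each request aioload sends should be equivalent to the curl call we ran earlier:

curl "http://localhost:8080/?name=Python"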

Now, let’s execute our test:

aioload -n 100 -c 30 test.ini -v
2020-01-10 20:37:47,284 - __init__:43 - INFO - d29aa85b9f2b4b90a25fd8c729d1312f - Starting script... -
2020-01-10 20:37:47,285 - runner:23 - INFO - d29aa85b9f2b4b90a25fd8c729d1312f - preparing_requests -
2020-01-10 20:37:47,286 - runner:28 - INFO - d29aa85b9f2b4b90a25fd8c729d1312f - prepared_requests -
2020-01-10 20:37:47,287 - runner:89 - INFO - d29aa85b9f2b4b90a25fd8c729d1312f - starting_requests -
2020-01-10 20:37:57,678 - runner:119 - INFO - d29aa85b9f2b4b90a25fd8c729d1312f - done - min=132.7ms; max=3132.05ms; mean=2663.62ms; req/s=9.090909090909092; req/q_std=1.38; stdev=838.97; codes.200=100; concurrency=30; requests=100;
2020-01-10 20:37:57,679 - __init__:55 - INFO - d29aa85b9f2b4b90a25fd8c729d1312f - Exiting script... -

This execution gives us the following numbers:

  • Min request duration: 132.7ms
  • Max request duration: 3132.05ms
  • Mean request duration: 2663.62ms
  • Requests per second: 9.09
  • Requests per second standard deviation: 1.38
  • Standard deviation of request durations: 838.97ms

These numbers make sense: gunicorn is running a single sync worker and each request sleeps for 0.1 seconds, so we cannot expect much more than about 10 requests per second. We want to improve these numbers, so for our example let’s remove the dummy sleep:

# ...
@app.route('/')
def hello():
    name = request.args.get("name", "World")
    return 'Hello, {}!'.format(escape(name))

Now, restart the gunicorn service and execute the same test again:

aioload -n 100 -c 30 test.ini -v
2020-01-10 20:40:52,873 - __init__:43 - INFO - 7384237ba3f349e6835f388ebbcc7059 - Starting script... -
2020-01-10 20:40:52,875 - runner:23 - INFO - 7384237ba3f349e6835f388ebbcc7059 - preparing_requests -
2020-01-10 20:40:52,876 - runner:28 - INFO - 7384237ba3f349e6835f388ebbcc7059 - prepared_requests -
2020-01-10 20:40:52,877 - runner:89 - INFO - 7384237ba3f349e6835f388ebbcc7059 - starting_requests -
2020-01-10 20:40:53,017 - runner:119 - INFO - 7384237ba3f349e6835f388ebbcc7059 - done - min=27.53ms; max=58.6ms; mean=34.26ms; req/s=50.0; req/q_std=55.15; stdev=8.26; codes.200=100; concurrency=30; requests=100;
2020-01-10 20:40:53,018 - __init__:55 - INFO - 7384237ba3f349e6835f388ebbcc7059 - Exiting script... -

  • Min request duration: 27.53ms
  • Max request duration: 58.6ms
  • Mean request duration: 34.26ms
  • Requests per second: 50.0
  • Requests per second standard deviation: 55.15
  • Standard deviation of request durations: 8.26ms

These numbers show better results: the server handles more requests per second, and the min, max and mean durations are all lower.

aioload can also display charts with matplotlib; for this we need to specify the --plot option.
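
For example, assuming the flag can simply be appended to the command we used before:

aioload -n 100 -c 30 test.ini -v --plot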

(Chart of the test results generated by aioload with matplotlib)

Conclusion

With aioload we can easily do load testing of our HTTP applications with Python, compare results and charts between executions, and optimize our apps.
