2019-04-17

Testing metrics thoughts and examples: how to turn lights on and off through MQTT with pytest-play

In this article I'll share some personal thoughts about test metrics and talk about some technologies and tools, playing with a real example: how to turn lights on and off through MQTT while collecting test metrics.

By the way, the considerations contained in this article are valid for any system, technology, test strategy and test tool: you can easily integrate your existing automated tests with statsd with a couple of lines of code in any language.
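For example, in Python a couple of lines with the statsd client package are enough to push a metric to a statsd daemon (a minimal sketch, assuming the statsd package is installed and a statsd server is listening on localhost:8125; any other language has equivalent clients):
import statsd  # pip install statsd

client = statsd.StatsClient("localhost", 8125, prefix="play")
client.timing("response_time", 320)   # timing metric, in milliseconds
client.incr("light_on.executions")    # counter metric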

I will use the pytest-play tool in this example so that even non-programmers should be able to play with automation while collecting metrics, because this tool is based on YAML (this way no classes, functions, threads, imports, no compilation, etc.) and, if Docker is already installed, no further installation is needed. You'll need only a bit of command line knowledge and traces of Python expressions like variables["count"] > 0.

Anyway... yes, you can drive telematics/IoT devices with MQTT using pytest-play collecting and visualizing metrics thanks to:
  • statsd, a "Daemon for easy but powerful stats aggregation"
  • Graphite, a statsd compatible "Make it easy to store and graph metrics" solution
or any other statsd capable monitoring engine.

In our example we will see step by step how to:
  • send a command to a device through MQTT (e.g., turn on a fridge light)
  • make assertions against the expected asynchronous response sent back by the device through MQTT (e.g., report light on/off status. In our case we expect a light on status)
  • collect externally observable key performance metrics in a JUnit compatible report file and optionally feed an external statsd metrics/monitoring engine (e.g., track how much time a command/feedback round trip on the MQTT broker takes)
all of this using MQTT and pytest-play with YAML files.

Why test metrics?

"Because we can" (cit. Big Band TheorySeries 01 Episode 09 - The Cooper-Hofstadter Polarization):
Sheldon: Someone in Sezchuan province, China is using his computer to turn our lights on and off.
Penny: Huh, well that’s handy. Um, here's a question... why?!
All together: Because we can!
If the "Because we can" answer doesn't convince your boss, there are several advantages that let you react proactively before something of not expected happens. And to be proactive you need knowledge of you system under test thanks to measurable metrics that let you:
  • know how your system behaves (and confirm where bottlenecks are located)
    • in standard conditions
    • under load
    • under stress
    • long running
    • peak response
    • with a big fat database
    • simulating a small percentage of bad requests
    • or any other sensible scenario that needs to be covered
  • know under which conditions your users will perceive
    • no performance deterioration
    • a performance deterioration
    • a critical performance deterioration
    • a stuck system
  • understand how much time is available before the first/critical/blocking performance deterioration is met, considering user/application growth trends
so that you can be proactive and:
  • keep your stakeholders informed with very valuable information
  • improve your system performance before something bad happens
Ouch! The effects of a bad release in action
In addition you can:
  • anticipate test automation failures due to timeouts: maybe you have already experienced a test that always passed and one day starts sporadically exceeding your maximum timeout
  • choose timeouts more carefully if there are no specific requirements
  • avoid false alarms like a generic "today the system seems slower". If there is a confirmed problem you might say instead: "compared to previous measurements, the system response is 0.7 s slower today. Systematically."
  • find corner cases. You might notice that the average response time is almost always the same or slightly higher because a particular scenario systematically produces a response time peak that is hard to discover compared to similar requests, and that might create integration problems if other components are not robust
  • avoid re-measuring response times of previous versions to compare them against the current build, because everything has already been tracked
What should you measure? Everything that is valuable for you:
  • API response times
  • time needed for an asynchronous observable effect to happen
  • metrics from a user/business perspective (e.g., what matters more to users: API response times, browser first paint, or the moment they can actually start using a web page?)
  • metadata (browser, versions, etc.). Metadata formats not compatible with statsd can be tracked in custom JUnit XML reports
  • pass/skip/error/etc rates
  • deploys (a simple counter works well here, see the example right after this list)
  • etc
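For example, a deploy event or a test outcome can be pushed to statsd as a counter with the same nc trick shown in the next section (a sketch, assuming the default statsd UDP port 8125 and a made-up deploys.myapp key):
echo -n "deploys.myapp:1|c" | nc -u -w0 127.0.0.1 8125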

Some information about statsd/Graphite and MQTT

statsd/Graphite

There is plenty of very interesting reading material about statsd and the "measure everything" approach.

If you are not familiar with statsd and Graphite you can install them with Docker (web login root/root by default):

docker run -d\
 --name graphite\
 --restart=always\
 -p 80:80\
 -p 2003-2004:2003-2004\
 -p 2023-2024:2023-2024\
 -p 8125:8125/udp\
 -p 8126:8126\
 graphiteapp/graphite-statsd

and play with it sending fake metrics using nc:
echo -n "my.metric:320|ms" | nc -u -w0 127.0.0.1 8125
you'll find new metric aggregations available:
stats.timers.$KEY.mean
stats.timers.$KEY.mean_$PCT
stats.timers.$KEY.upper_$PCT
stats.timers.$KEY.sum_$PCT
...
where:

  • $KEY is my.metric in this example (so metric keys are hierarchical for a better organization!)
  • $PCT is the percentile (e.g., stats.timers.my.metric.upper_90)
More info, options, configurations and metric types can be found in the statsd and Graphite documentation.

What is MQTT?

From http://mqtt.org/:
MQTT is a machine-to-machine (M2M)/"Internet of Things" connectivity protocol.
It was designed as an extremely lightweight publish/subscribe messaging transport.
It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium.
MQTT is the de facto standard for smart home/IoT/telematics/embedded device communications, even on low performance embedded devices, and it is available on many cloud infrastructures.

Every actor can publish a message on a certain topic and every actor can subscribe to a set of topics, so you receive every message of interest.

Topics are hierarchical so that you can subscribe to a very specific or wide range of topics coming from devices or sensors (e.g., /house1/room1/temp, /house1/room1/humidity or all messages related to /house1/room1/ etc).

For example, in a telematics application every device listens for any command or configuration sent by a server component through an MQTT broker (e.g., project1/DEVICE_SN/cmd);
the server is notified of any device response or communication by subscribing to a particular topic (e.g., project1/DEVICE_SN/data).
So:
  • you send commands to a particular device publishing messages on foo/bar/DEVICE_SN/cmd 
  • you expect responses subscribing to foo/bar/DEVICE_SN/data.
If you are not confident with MQTT you can install the mosquitto utilities and play with the mosquitto_sub and mosquitto_pub commands against the public broker iot.eclipse.org. For example you can publish a message on a given topic:
$ mosquitto_pub -t foo/bar -h iot.eclipse.org -m "hello pytest-play!"
and see the message arrive, assuming that you previously subscribed to the topics of interest (here we see every message published under foo/bar):
$ mosquitto_sub -t foo/bar/# -h iot.eclipse.org -v

Prerequisites

pytest-play is multi platform because it is based on Python (installation might differ between operating systems).
Using Docker instead, no pytest-play installation is required: once Docker is installed you are ready to start playing with pytest-play.
As a user you should be confident with a shell and command line options.

Steps

And now let's start with our example.

Create a new project folder

Create a new folder (e.g., fridge) and enter it.

Create a variables file

Create an env.yml file with the following contents:
pytest-play:
  mqtt_host: YOUR_MQTT_HOST
  mqtt_port: 20602
  mqtt_endpoint: foo/bar
You can have one or more configuration files defining variables for your convenience. Typically you have one configuration file for each target environment (e.g., dev.yml, alpha.yml, etc.).

We will use this file later for passing variables thanks to the --variables env.yml command line option, so you can switch environments by passing different files.
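For example, a hypothetical alpha.yml for another environment would define the same variables with different values:
pytest-play:
  mqtt_host: ALPHA_MQTT_HOST
  mqtt_port: 20602
  mqtt_endpoint: foo/bar
and you would run the same scenarios against that environment simply passing --variables alpha.yml instead of --variables env.yml.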

Create the YML script file

Create a YAML file called test_light_on.yml inside the fridge folder (or any subfolder). Note well: the *.yml extension and the test_ prefix matter, otherwise the file won't be collected as a test at this time of writing.

If you need to simulate a command or simulate a device activity you need just one command inside your YAML file:
- comment: send light turn ON command
  provider: mqtt
  type: publish
  host: "$mqtt_host"
  port: "$mqtt_port"
  endpoint: "$mqtt_endpoint/$device_serial_number/cmd"
  payload: '{"Payload":"244,1"}'
where 244 stands for the internal ModBUS registry reference for the fridge light and 1 stands for ON (and 0 for OFF).

But... wait a moment. So far we are only sending a payload to an MQTT broker, resolving the mqtt_host variable for a given endpoint, and nothing more... pretty much the same business you can do with mosquitto_pub, right? You are right! That's why we are about to implement something more:
  • subscribe to our target topic where the expected response will come and store every single received message to a messages variable (it will contain an array of response payload strings)
  • add an asynchronous waiter waiting for the expected device response
  • once the expected response has arrived, make some assertions
  • track testing metrics
  • enable support for parametrized scenarios with decoupled test data
  • Jenkins/CI capabilities (not covered in this article, see http://davidemoro.blogspot.com/2018/03/test-automation-python-pytest-jenkins.html)
Put the following contents inside the test_light_on.yml file and save:
markers:
  - light_on
test_data:
  - device_serial_number: 8931087315095410996
  - device_serial_number: 8931087315095410997
---
- comment: subscribe to device data and store messages to messages variable once received (non blocking subscribe)
  provider: mqtt
  type: subscribe
  host: "$mqtt_host"
  port: "$mqtt_port"
  topic: "$mqtt_endpoint/$device_serial_number"
  name: "messages"
- comment: send light turn ON command
  provider: mqtt
  type: publish
  host: "$mqtt_host"
  port: "$mqtt_port"
  endpoint: "$mqtt_endpoint/$device_serial_number/cmd"
  payload: '{"Payload":"244,1"}'
- comment: start tracking response time (stored in response_time variable)
  provider: metrics
  type: record_elapsed_start
  name: response_time
- comment: wait for a device response
  provider: python
  type: while
  timeout: 12
  expression: 'len(variables["messages"]) == 0'
  poll: 0.1
  sub_commands: []
- comment: store elapsed response time in response_time variable
  provider: metrics
  type: record_elapsed_stop
  name: response_time
- comment: assert that status light response was sent by the device
  provider: python
  type: assert
  expression: 'loads(variables["messages"][0])["measure_id"] == [488]'
- comment: assert that status light response was sent by the device with status ON
  provider: python
  type: assert
  expression: 'loads(variables["messages"][0])["bin_value"] == [1]'
Let's comment on the above YAML configuration command by command and section by section.

Metadata, markers and decoupled test data

First of all the --- delimiter splits an optional metadata document from the scenario itself. The metadata section in our example contains:
markers:
  - light_on
You can mark your scripts with one or more markers so that you can select which scenarios will run from the command line using marker expressions like -m light_off, or something like -m "light_off and not slow", assuming that you have some scripts marked with hypothetical light_off and slow markers.
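For example, reusing the Docker command from the "Run your scenario" section later in this article, you could run only the scenarios marked with light_on by appending the standard pytest marker option (a sketch, assuming the same env.yml file described above):
docker run --rm -it -v $(pwd):/src davidemoro/pytest-play --variables env.yml -m light_on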

Decoupled test data and parametrization

Assume that you have 2 or more real devices providing different firmware versions always ready to be tested.

In such a case we want to define our scenario once and have it executed multiple times thanks to parametrization. Our scenario will be executed for each item defined in the test_data array in the metadata section. In our example it will be executed twice:
test_data:
  - device_serial_number: 8931087315095410996
  - device_serial_number: 8931087315095410997
If you want you can track different metrics for different serial numbers so that you are able to compare different firmware versions.

Subscribe to topics where we expect a device response

As stated in the official play_mqtt documentation (https://github.com/davidemoro/play_mqtt),
you can subscribe to one or more topics using the mqtt provider and type: subscribe. You have to provide the host where the MQTT broker lives (e.g., iot.eclipse.org), the port and obviously the topic you want to subscribe to (e.g., foo/bar/$device_serial_number/data/light, where $device_serial_number will be replaced with what you define in the environment configuration files or in each test_data section).
- comment: subscribe to device data and store messages to messages variable once received (non blocking subscribe)
  provider: mqtt
  type: subscribe
  host: "$mqtt_host"
  port: "$mqtt_port"
  topic: "$mqtt_endpoint/$device_serial_number"
  name: "messages"
This is a non blocking call: while the flow continues, it collects in the background every message published on the topics of interest, storing them in a messages variable.

messages is an array containing all matching messages coming from MQTT and you can access its value in expressions with variables["messages"].
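For example, assuming the device answers with the JSON payload expected by the assertions later in this article (a hypothetical payload shown here only for illustration), after the response arrives you would have something like:
variables["messages"]   # ['{"measure_id": [488], "bin_value": [1]}']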

Publish a command

This is self explanatory (you can send any payload, even dynamic/parametrized payloads):
- comment: send light turn ON command
  provider: mqtt
  type: publish
  host: "$mqtt_host"
  port: "$mqtt_port"
  endpoint: "$mqtt_endpoint/$device_serial_number/cmd"
  payload: '{"Payload":"244,1"}'
where 244 is the internal reference and 1 stands for ON.

Track time metrics

This command lets you start tracking time from now until a record_elapsed_stop command is executed:
- comment: start tracking response time (stored in response_time variable)
  provider: metrics
  type: record_elapsed_start
  name: response_time
... <one or more commands or asynchronous waiters here>
- comment: store elapsed response time in response_time variable
  provider: metrics
  type: record_elapsed_stop
  name: response_time
The time metric will be available under a variable called, in our example, response_time (from name: response_time). For a full set of metrics related commands and options see https://github.com/pytest-dev/pytest-play.

You can record key metrics of any type for several reasons:
  • make assertions about some expected timings
  • report key performance metrics or properties in custom JUnit XML reports (in conjunction with the command line option --junit-xml results.xml for example so that you have an historical trend of metrics for each past or present test execution)
  • report key performance metrics on statsd capable third party systems (in conjunction with the command line option --stats-d [--stats-prefix play --stats-host http://myserver.com --stats-port 3000])

While

Here we are waiting until a response message has been collected and stored in the messages variable (remember the already discussed MQTT subscribe command in charge of collecting/storing messages of interest?):
- comment: wait for a device response
  provider: python
  type: while
  timeout: 12
  expression: 'len(variables["messages"]) == 0'
  poll: 0.1
  sub_commands: []
You can specify a timeout (e.g., timeout: 12), a poll time (how many seconds to wait between while iterations, in this case poll: 0.1) and an optional list of the while's sub commands (not needed for this example).

The while command keeps looping as long as the expression returns a true-ish value (or until the timeout expires) and exits as soon as the expression becomes false-ish.

Does your device publish different kinds of data on the same topic? Modify the while expression, restricting it to the messages of interest, for example:
- comment: [4] wait for the expected device response
  provider: python
  type: while
  timeout: 12
  expression: 'len([item for item in variables["messages"] if loads(item)["measure_id"] == [124]]) == 0'
  poll: 0.1
  sub_commands: []
In the above example we iterate over our array keeping only the entries with a given measure_id, where loads is a built-in JSON parser (Python's json.loads).
<?xml version="1.0" encoding="utf-8"?><testsuite errors="0" failures="0" name="pytest" skipped="0" tests="1" time="10.664"><testcase classname="test_on.yml" file="test_on.yml" name="test_on.yml[test_data0]" time="10.477"><properties><property name="response_time" value="7.850502967834473"/></properties><system-out>...

Assertions

And now it's assertions time:
- comment: assert that status light response was sent by the device
  provider: python
  type: assert
  expression: 'loads(variables["messages"][0])["measure_id"] == [488]'
- comment: assert that status light response was sent by the device with status ON
  provider: python
  type: assert
  expression: 'loads(variables["messages"][0])["bin_value"] == [1]'
Remember that the messages variable is an array of string messages? We take the first message (with variables["messages"][0] you get the first raw payload), parse its JSON payload so that assertions are simpler (loads(variables["messages"][0]) for the sake of completeness), obtaining a dictionary, and then assert that we have the expected values under certain dictionary keys.

As you can see pytest-play is not 100% codeless by design because it requires a very basic knowledge of Python expressions, for example:
  • variables["something"] == 0
  • variables["something"] != 5
  • not variables["something"]
  • variables["a_boolean"] is True
  • variables["a_boolean"] is False
  • variables["something"] == "yet another value"
  • variables["response"]["status"] == "OK" and not variables["response"]["error_message"]
  • "VALUE" in variables["another_value"]
  • len([item for item in variables["mylist"] if item > 0]) == 0
  • variables["a_string"].startswith("foo")
One-line protected Python-based expressions let you express any kind of waiter/assertion without having to extend the framework's command syntax by introducing an exotic YAML-based meta language that would never be able to express all possible use cases. The basic idea behind Python expressions is that, even for non-programmers, it is easier to learn the basics of Python assertions than to figure out how to express assertions in an obscure meta language.

pytest-play is not related to MQTT only: it lets you write actions and assertions against a real browser with Selenium, API/REST, websockets and more.

So if you have to automate a device simulator or driver task, some simple API calls with assertions, an asynchronous wait until a condition is met with timeouts, browser interactions, cross technology actions (e.g., publish an MQTT message and poll an HTTP response until something happens) or decoupled test data parametrization... pytest-play is worth a try, even if you are not a programmer, because you don't have to deal with imports, function or class definitions, and it is always available if you have Docker installed.


And now you can show off with shining metrics!

Run your scenario

And finally, assuming that you are already inside your project folder, let's run our scenario using Docker (remember --network="host" if you want to send metrics to a server listening on localhost):
docker run --rm -it -v $(pwd):/src --network="host" davidemoro/pytest-play --variables env.yml --junit-xml results.xml --stats-d --stats-prefix play test_light_on.yml
The previous command runs our scenario printing the results; if there is a statsd server listening on localhost, metrics are collected and you will be able to create live dashboards like the following one:
statsd/Graphite response time dashboard
and metrics are stored in the results.xml file too:
<?xml version="1.0" encoding="utf-8"?><testsuite errors="0" failures="0" name="pytest" skipped="0" tests="1" time="10.664"><testcase classname="test_on.yml" file="test_on.yml" name="test_on.yml[test_data0]" time="10.477"><properties><property name="response_time" value="7.850502967834473"/></properties><system-out>...

Sum up

This was a very long article and we talked about a lot of technologies and tools. So if you are not yet familiar with some of the tools or technologies, it's time to read some documentation and play with some hello world examples.

Any feedback is welcome!

Do you like pytest-play?

Let's get in touch for any suggestion, contribution or comments. Contributions will be very appreciated too!

2019-03-20

CSS selectors guidelines for smooth test automation with Selenium

Is your mind exploding while trying to locate a DOM element, ending up after 20 minutes with a selector like this?
.elem__container_search div:nth-child(1) .row1 tr:nth-child(5) input:nth-child(2)
Or even worse, do you have to switch to XPath? And after a while you realize it is not correct, spending even more time? Or someone changed the order of the input elements and your tests are broken?

You should be able to address any important application element in a page with no pain in seconds with a CSS selector: .last-update is better than .row1 tr:nth-child(5) input:nth-child(2), do you agree?

If your answer is yes, this article is for you!

Do not blame developers, provide guidelines and examples

First of all, do not blame developers for poor CSS selectors: provide guidelines and examples instead! Nowadays providing guidelines is even more important because with some JavaScript frameworks you might see cryptic, sometimes random (!!!), codes used as classes and ids, not usable for our purposes.

Test robustness, test automation development and maintenance time are affected by poor selectors

In other terms a bad page design affects productivity:
  • you should locate an element in seconds instead of minutes (.last-update vs .row1 tr:nth-child(5) input:nth-child(2))
  • the .row1 tr:nth-child(5) input:nth-child(2) selector might not be valid for users with different roles while .last-update will always be valid
  • if the element order changes, .last-update will still be valid and nothing breaks (with XPath or selectors like .row1 tr:nth-child(5) input:nth-child(2) you will get broken tests)
... so less time is wasted and more money saved with good conventions (see the markup example right below).
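As a concrete (made up) markup example, a dedicated class keeps the locator stable no matter where the element ends up in the page:
<!-- fragile: reachable only by position, breaks as soon as rows are reordered -->
<input type="text" value="2019-04-17"/>
<!-- robust: reachable with the .last-update selector, position independent -->
<input class="last-update" type="text" value="2019-04-17"/>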

Define frontend development conventions together and follow them


If you are using a framework or a CMS there is probably already a good selectors guideline, so you can adopt it if it fits your needs, extend it if needed and contribute back improvements.

You can write examples for common use cases so that everyone speaks the same language. E.g., define a naming convention for elements with a status: class="wifi wifi--state-on" / class="wifi wifi--state-off".
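For example (illustrative markup only):
<span class="wifi wifi--state-on">WiFi</span>
so that .wifi addresses the element itself while .wifi--state-on / .wifi--state-off immediately tell you (and your tests) its current status.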

Alternatively you can adopt an existing and possibly widely used convention like BEM (Block Element Modifier, a methodology that provides a naming convention for frontend developers) or any other similar initiative.

Final thoughts

Sometimes you don't control the web application you are testing, so you need to explain what you are expecting and why it is important, communicating the improvements in terms of better test automation robustness and development time.



It is also useful to measure the time wasted due to bad practices so that you can better support your proposal, reporting real numbers to management in terms of saved/lost money.

So my final suggestion is to find a solution together so that every future page will follow agreed guidelines according to existing best practice if possible.

And as a developer always keep in mind the following questions as double check:
  • am I able to locate this element with no pain?
  • am I able to understand the status of this element?

2019-02-13

Turn any program that uses STDIN/STDOUT into a WebSocket server using websocketd

Yesterday I tried for the first time websocketd and it is amazing!
This is a little gem created by @joewalnes that lets you implement language agnostic WebSocket applications based on any command line program that reads text from stdin and writes messages to stdout. A simple (and ingenious) approach.

I think that it is perfect for testing too, so I updated an existing integration test for a WebSocket client of mine to use a local websocketd server on TravisCI instead of relying on an external public service requiring internet access as done before (wss://echo.websocket.org). This activity was already on my roadmap, so why not try out websocketd?!

As you can see here (https://github.com/davidemoro/pytest-play-docker/pull/42/files), in .travis.yml I added a before_script calling a travis/setup_websocket.sh script that installs and runs websocketd in the background on port 8081, based on a simple travis/echo_ws.sh that reads lines from stdin and echoes them to stdout.

The websocketd syntax is the following:

./websocketd --port=8081 ./echo_ws.sh

where echo_ws.sh can be any stdin/stdout based executable. More details in the next section.
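For reference, a minimal stdin/stdout echo script of this kind could look like the following (a sketch, not necessarily identical to the travis/echo_ws.sh linked above):
#!/bin/bash
# read each incoming WebSocket message from stdin and echo it back on stdout
while read LINE; do
  echo "$LINE"
done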

Wanna tryout websocketd?

Download a websocketd version compatible with your OS/architecture from http://websocketd.com/#download and unzip it: the folder contains a websocketd executable ready to be used (no installation needed). Then follow the tutorials described in https://github.com/joewalnes/websocketd/wiki.

Alternatively you can test websocketd using pytest-play's play_websocket plugin that is ready to be used assuming that you have Docker installed.
docker run --rm -it -v $(pwd):/src --network host davidemoro/pytest-play --variables variables.yml
Additional links you might find useful for this example:

2019-02-12

Setting up Cassandra database service on TravisCI


At this time of writing TravisCI says that if you want to run a Cassandra service you have to add a cassandra service according to https://docs.travis-ci.com/user/database-setup/#cassandra:
services:
  - cassandra
but if you try to initialize Cassandra right away you might find out that it is not yet ready or running, depending on timing.

The solution is to wait until Cassandra actually accepts connections before running any initialization.
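One possible approach (a sketch, not necessarily the exact script used by pytest-play-docker) is a small polling loop in a before_script step that blocks until cqlsh can actually connect:
until cqlsh -e 'DESCRIBE CLUSTER' > /dev/null 2>&1; do
  echo "waiting for cassandra..."
  sleep 1
done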
And now Cassandra is ready to be used in your tests (for example https://github.com/davidemoro/pytest-play-docker/blob/master/tests/test_cassandra.yml; in this case I'm using plain yml files thanks to pytest-play).

2019-02-09

High quality automated docker hub push using Github, TravisCI and pyup for Python tool distributions

UPDATE 20190611: the article contents are still valid but I definitively switched from pyup to the requires.io service. Why? There was an unexpected error on my pyup project and the support never sent any kind of feedback for weeks (is the project still alive?!). Anyway I happily switched to requires.io: I like the way they manage cumulative pull requests with errored packages, and the support is responsive.

Let's say you want to distribute a Python tool with docker using known good dependency versions ready to be used by end users... In this article you will see how to continuously keep a Docker Hub container up to date with minimal management effort (because I'm a lazy guy) using GitHub, TravisCI and pyup.

The goal was to reduce as much as possible any manual activity for updates, check that everything works fine before pushing, minimize build times and keep the docker container always secure and updated, with high confidence in the final quality.

As an example let's see what happens under the hood behind every pytest-play Docker Hub update on the official container https://cloud.docker.com/u/davidemoro/repository/docker/davidemoro/pytest-play (by the way if you are a pytest-play user: did you know that you can use Docker for running pytest-play and that there is a docker container ready to be used on Docker Hub? See a complete and working example here https://davidemoro.blogspot.com/2019/02/api-rest-testing-pytest-play-yaml-chuck-norris.html)

Repositories

The docker build/publish stuff lives on another repository, so https://github.com/davidemoro/pytest-play-docker is the repository that implements the Docker releasing workflow for https://github.com/pytest-dev/pytest-play on Docker Hub (https://hub.docker.com/r/davidemoro/pytest-play).

Workflow

This is the highly automated workflow, at this time of writing, for the pytest-play publishing on Docker Hub.
All test executions run against the docker build, so there is a guarantee that what is pushed to Docker Hub works fine (it doesn't only check that the build was successful: it runs integration tests against the docker build). So: no version incompatibilities, no integration issues between all the integrated third party pytest-play plugins and no issues due to operating system integration (e.g., I recently experienced an issue on Alpine Linux with pip install psycopg2-binary that apparently worked fine, but if you try to import psycopg2 inside your code you get an unexpected import error due to a recent issue reported here https://github.com/psycopg/psycopg2/issues/684).

So now every time you run a command like the following one (see a complete and working example here https://davidemoro.blogspot.com/2019/02/api-rest-testing-pytest-play-yaml-chuck-norris.html):
docker run --rm -v $(pwd):/src davidemoro/pytest-play
you know the workflow behind every automated docker push for pytest-play.

Acknowledgements

Many thanks to Andrea Ratto for the 10 minute Travis build speedup due to the Docker cache: from ~11 minutes to ~1 minute is a huge improvement indeed! It was possible thanks to the docker pull davidemoro/pytest-play command, building with the --cache-from davidemoro/pytest-play option and running the longest steps in a separate and cacheable layer (e.g., the very long cassandra-driver compilation, moved to requirements_cassandra.txt, is executed only if necessary).
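In practice the caching pattern looks something like this (a simplified sketch of the commands mentioned above, not the exact .travis.yml content):
docker pull davidemoro/pytest-play || true
docker build --cache-from davidemoro/pytest-play -t davidemoro/pytest-play .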

Relevant technical details about pytest-play-docker follow (some minor optimizations are still possible, saving in terms of final image size).


Feedback

Any feedback will be always appreciated.

Do you like the Docker Hub push process for pytest-play? Let me know by becoming a pytest-play stargazer!

2019-02-02

API/REST testing like Chuck Norris with pytest play using YAML


In this article we will see how to write HTTP API tests with pytest using YAML files thanks to pytest-play >= 2.0.0 (pytest-play provides support for Selenium, MQTT, SQL and more. See third party pytest-play plugins).

The guest star is Chuck Norris thanks to the public JSON endpoint available at https://api.chucknorris.io/ so you will be able to run your tests on your own following this example.

Obviously this is a joke because Chuck Norris cannot fail so tests are not needed.

Prerequisites and installation

Installation is not needed, the only prerequisite is Docker thanks to https://hub.docker.com/r/davidemoro/pytest-play.

Inside the above link you'll find the instructions needed for installing Docker for any platform.

If you want to run this example without Docker, install pytest-play with the external plugin play_requests, based on the fantastic requests library (play_requests is already included in the docker container).

Project structure

You need:
  • a folder (e.g., chuck-norris-api-test)
  • one or more test_XXX.yml files containing your steps (test_ and .yml extension matter)
For example:

Each scenario will be repeated for every item you provide in the test_data structure.

The first example asserts that the categories list contains some values against this endpoint https://api.chucknorris.io/jokes/categories; the second example shows how to search by category (probably Chuck Norris will find you, according to this Chuck Norris fact: "You don't find Chuck Norris, Chuck Norris finds you!").

Alternatively you can check out a ready to use folder containing this example.
For documentation and all the available options see https://github.com/pytest-dev/pytest-play.

Usage

Enter the project folder and run the following command:

docker run --rm -v $(pwd):/src davidemoro/pytest-play


You can append extra standard pytest options like -x, --pdb and so on. See https://docs.pytest.org/en/latest/

Homeworks

It's time to show off with a GET roundhouse kick! Ping me on twitter @davidemoro sharing your pytest-play implementation against the random Chuck Norris fact generator by category!

GET https://api.chucknorris.io/jokes/random?category=dev
{
    "category": ["dev"],
    "icon_url": "https:\/\/assets.chucknorris.host\/img\/avatar\/chuck-norris.png",
    "id": "yrvjrpx3t4qxqmowpyvxbq",
    "url": "https:\/\/api.chucknorris.io\/jokes\/yrvjrpx3t4qxqmowpyvxbq",
    "value": "Chuck Norris protocol design method has no status, requests or responses, only commands."
}

Do you like pytest-play?

Let's get in touch for any suggestion, contribution or comments. Contributions will be very appreciated too!