In this article I'll share some personal thoughts about Test Automation Frameworks; you can take inspiration from them if you are going to evaluate different test automation platforms or assess your current test automation solution (or solutions).
Although this is a generic article about test automation, you'll find many examples explaining how to address some common needs using the Python-based test framework pytest and the Jenkins automation server: use the information here just as a comparison, and feel free to comment, sharing alternative methods or ideas coming from different worlds.
It also contains references to some well-known (or less well-known) pytest plugins and testing libraries.
Before talking about automation and test automation framework features and characteristics, let me introduce the most important test automation goal you should always keep in mind.
Test automation goals: ROI
You invest in automation for a future return on investment.
Simpler approaches let you start more quickly, but in the long term they don't perform well in terms of ROI, and vice versa. In addition, the initial complexity due to a higher level of abstraction may produce better results in the medium or long term: better ROI and some benefits for non-technical testers too. Have a look at the ISTQB Test Automation Engineer certification syllabus for more information:
So what I mean is that test automation is not easy: it is not just recording some actions or writing some automated test procedures, because how you decide to automate things affects the ROI. Your test automation strategy should consider your testers' technical skills (now and in the future), how to improve your system's testability (is your software testable?), good test design, and architecture/system/domain knowledge. In other words, beware of vendors selling "silver bullet" solutions promising smooth test automation for everyone, especially rec&play solutions: there are no silver bullets.
Test automation solution features and characteristics
A test automation solution should be generic and flexible enough; otherwise there is the risk of having to adopt different, and maybe incompatible, tools for different kinds of tests. Try to imagine the mess of the following situation: one rec&play-only tool or commercial service for browser-based tests, one tool for API testing only, performance test frameworks that don't let you reuse existing scenarios, one tool for BDD-only scenarios, different Jenkins jobs with different settings for each different tool, no test management tool integration, etc. A unique solution, if possible, would be better: something that lets you choose the level of abstraction without forcing one on you, something that lets you start simple and then follows your future needs and the skill evolution of your testers.
That's one of the reasons why I prefer pytest over a hyper-specialized solution like behave, for example: if you combine pytest + pytest-bdd you can write BDD scenarios too, and you are not forced to use a BDD-only test framework (giving up pytest's flexibility and tons of additional plugins).
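To give you an idea, here is a minimal hedged sketch of a pytest-bdd test (the feature file, scenario and step names are invented for illustration):

# login.feature:
#   Feature: Login
#     Scenario: Successful login
#       Given I am logged out
#       When I submit valid credentials
#       Then I see the dashboard

from pytest_bdd import scenario, given, when, then

@scenario("login.feature", "Successful login")
def test_successful_login():
    pass

@given("I am logged out")
def logged_out():
    ...

@when("I submit valid credentials")
def submit_credentials():
    ...

@then("I see the dashboard")
def see_dashboard():
    ...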
And now, after this preamble, an unordered list of features or characteristics that you may consider during your test automation software selection:
- a fine-grained test selection mechanism that allows you to be very selective when choosing which tests you are going to launch
- parametrization
- high reuse
- test execution logs easy to read and analyze
- easy target environment switch
- block on first failure
- repeat your tests for a given amount of times
- repeat your tests until a failure occurs
- support parallel executions
- provide integration with third party software like test management tools
- integration with cloud services or browser grids
- execute tests in debug mode or with different log verbosity
- support random test execution order (the order should be reproducible thanks to a random seed, in case problems occur)
- versioning support
- integration with external metrics engine collectors
- support different levels of abstraction (e.g., keyword driven testing, BDD, etc)
- rerun last failed
- integration with platforms that let you test against a large combination of OS and browsers if needed
- the ability to extend your solution by writing or installing third-party plugins (see the sketch below for how pytest approaches this)
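Regarding the last point, here is a minimal sketch of how pytest lets you extend the framework through a conftest.py-based plugin; the --target-env option and fixture below are hypothetical examples, not an official API:

# conftest.py
import pytest

def pytest_addoption(parser):
    # hook: add a custom command line option to the test runner
    parser.addoption("--target-env", default="dev",
                     help="name of the target environment")

@pytest.fixture(scope="session")
def target_env(request):
    # expose the option value to tests as a fixture
    return request.config.getoption("--target-env")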
Typically a test automation engineer will drive automated test runs using the framework's command line interface (CLI) during test development, but you'll find out very soon that you need an automation server for long-running tests, scheduled builds and CI: here comes Jenkins. Jenkins can also be used by non-technical testers for launching test runs or initializing an environment with some test data.
Jenkins
What is Jenkins? From the Jenkins website:
Continuous Integration and Continuous Delivery. As an extensible automation server, Jenkins can be used as a simple CI server or turned into the continuous delivery hub for any project.
So thanks to Jenkins everyone can launch a parametrized automated test session just using a browser: no command line and nothing installed on your personal computer. More power to non-technical users, thanks to Jenkins!
With Jenkins you can easily schedule recurring automatic test runs, start parametrized test runs remotely via external software, implement CI and many other things. In addition, as we will see, Jenkins is quite easy to configure and manage thanks to its through-the-web configuration and/or Jenkins pipelines.
Basically, Jenkins is very good at starting builds and, more generally, jobs. In our case Jenkins will be in charge of launching our parametrized automated test runs.
And now let's talk a little bit about Python and the pytest test framework.
Python for testing
I don't know if there are any articles on the net with statistics about the correlation between Test Automation Engineer job offers and the Python programming language, compared with other programming languages. If you find such a resource, please share it with me!
My personal feeling, after observing many Test Automation Engineer job offers (or similar QA jobs with some automation flavor) for a while, is that the word Python is very common. Most of the time it is one of the nice-to-have requirements, and other times it is mandatory.
Let's see why the programming language of choice for many QA departments is Python, even for companies that are not using Python for building their product or solutions.
Why Python for testing
Why is Python becoming so popular for test automation? Probably because it is more approachable for people with little or no programming knowledge compared to other languages. In addition, the Python community is very supportive and friendly, especially with newcomers, so if you are planning to attend any Python conference, be prepared to fall in love with this fantastic community and make new friends (friends, not only connections!). For example, at the time of writing you are still in time to attend PyCon Nove 2018 in beautiful Florence (even better if you like history, good wine, good food and meeting great people):
Just compare the most classic hello world, for example in Java:
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
and compare it with the Python version now:
print("Hello, World!")
Do you see any difference? If you are trying to explain to a non-programmer how to print a line in the terminal window with Java, you'll have to introduce public, static, void, class, System, installing a runtime environment (choosing from different versions), installing an IDE, running javac, etc., and only at the end will you be able to see something printed on the screen. With Python, which comes preinstalled in many distributions, you just focus on what you need to do. Requirements: a text editor and Python installed. If you are not experienced, you start with a simple approach and later progressively learn more advanced testing approaches.
And what about test assertions? Compare, for example, a JavaScript-based assertion:
expect(b).not.toEqual(c);
with the Python version:
assert b != c
So no expect(a).not.toBeLessThan(b), expect(c >= d).toBeTruthy() or expect(e).toBeLessThan(f): with Python you just write plain assert statements (e.g., assert a >= b), so there is nothing to remember for assertions!
Python is a big and very powerful programming language, but it follows a "pay only for what you eat" approach.
Why pytest
If Python is the language of your choice, you should consider the pytest framework and its high-quality community plugins; I think it is a good starting point for building your own test automation solution.
The pytest framework (
https://docs.pytest.org/en/latest/) makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.
Most important pytest features:
- simple assertions instead of inventing assertion APIs (.not.toEqual or self.assert*)
- auto-discovery of test modules and functions
- effective CLI for controlling what is going to be executed or skipped using expressions
- fixtures: easy management of the lifecycle of long-lived test resources; parametrized fixtures make it easy and fun to implement what you found hard and boring with other frameworks
- fixtures as function arguments, a dependency injection mechanism for test resources
- overriding fixtures at various levels (see the sketch after this list)
- framework customizations thanks to pluggable hooks
- a very large third-party plugin ecosystem
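As an example of fixture overriding, a conftest.py in a subfolder can redefine a fixture with the same name for all the tests in that package (the file layout and values here are purely illustrative):

# tests/conftest.py
import pytest

@pytest.fixture
def user():
    return "regular-user"

# tests/admin/conftest.py: same fixture name, overridden
# for all the tests living under tests/admin/
import pytest

@pytest.fixture
def user():
    return "admin-user"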
I strongly suggest having a look at the pytest documentation, but I'd like to give some more examples showing fixtures, code reuse, test parametrization and improved test maintainability. If you are not a technical reader, you can skip this section.
I'll try to explain fixtures with practical examples based on questions and answers:
- When should a new instance of our test resource be created?
You can control that with the fixture scope (session, module, class, function, or more advanced options like autouse). Session means that your test resource will live for the entire session, module/class means it lives for all the tests contained in that module or class, and with function you'll get a fresh instance of your test resource for each test
- How can I define teardown actions at the end of a test resource's life?
You can add finalizer code after the yield line; it will be invoked at the end of the test resource's lifecycle. For example, you can close a connection, wipe out some data, etc.
- How can I execute all my existing tests once per fixture configuration?
You can do that with params. For example, you can reuse all your existing tests to verify the integration with different real databases or SMTP servers. Or, if you have a web application offering the same features but deployed with a different look&feel for different brands, you can reuse all your existing functional UI tests thanks to pytest's fixture parametrization and a page object pattern, where by different look&feel I mean not only different CSS but different UI components (e.g., completely different datetime widgets or navigation menus), different component disposition on the page, etc.
- How can I decouple test implementation and test data? Thanks to parametrize you can decouple them and write your test implementation just once. Your test will be executed once for each set of test data
Here you can see an example of fixture parametrization (test_smtp will be executed twice because there are 2 different fixture configurations):
import pytest
import smtplib

@pytest.fixture(scope="module",
                params=["smtp1.com", "smtp2.org"])
def smtp(request):
    smtp = smtplib.SMTP(request.param, 587, timeout=5)
    yield smtp
    print("finalizing %s" % smtp)
    smtp.close()

def test_smtp(smtp):
    # use the smtp fixture (e.g., smtp.sendmail(...))
    # and make some assertions.
    # The same test will be executed twice (2 different params)
    ...
And now an example of test parametrization:
import pytest

@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
For more info see the official pytest documentation on fixtures and parametrization.
This is only pytest; as we will see, there are many pytest plugins that extend its core features.
Pytest plugins
There are hundreds of pytest plugins; the ones I use most frequently are:
- pytest-bdd, BDD library for the pytest runner
- pytest-variables, plugin for pytest that provides variables to tests/fixtures as a dictionary via a file specified on the command line
- pytest-html, plugin for generating HTML reports for pytest results
- pytest-selenium, plugin for running Selenium with pytest
- pytest-splinter, a pytest-selenium alternative based on Splinter; pytest, Splinter and Selenium integration for anyone interested in browser interaction in tests
- pytest-xdist, a py.test plugin for test parallelization, distributed testing and loop-on-failures testing modes
- pytest-testrail, pytest plugin for creating TestRail runs and adding results on the TestRail test management tool
- pytest-randomly, a pytest plugin to randomly order tests and control random seed
(but there are different random order plugins if you search for "pytest random")
- pytest-repeat, plugin for pytest that makes it easy to repeat a single test, or multiple tests, a specific number of times. You can repeat a test or group of tests until a failure occurs
- pytest-play, an experimental rec&play pytest plugin that lets you execute a set of actions and assertions using commands serialized in JSON format. It makes test automation more affordable for non-programmers or non-Python programmers for browser, functional, API, integration or system testing, thanks to its pluggable architecture and many plugins that let you interact with the most common databases and systems. It also provides some facilities for writing browser UI actions (e.g., implicit waits before interacting with an input element) and asynchronous checks (e.g., waiting until a certain condition is true)
Python libraries for testing:
- PyPOM, Python page object model for Selenium or Splinter
- pypom_form, a PyPOM abstraction that extends the page object model applied to forms thanks to declarative form schemas
Scaffolding tools:
- cookiecutter-qa, generates a test automation project ready to be integrated with Jenkins and the TestRail test management tool, providing working hello world examples. It ships with all the above plugins and provides examples based on raw Splinter/Selenium calls, a BDD example and a pytest-play example
- cookiecutter-performance, generates a tox-based environment built on Taurus bzt for performance tests, BlazeMeter-ready for distributed/cloud performance tests. Thanks to the bzt/Taurus pytest executor you will be able to reuse all your pytest-based automated tests for performance testing
Pytest + Jenkins together
We've discussed Python, pytest and Jenkins, the main ingredients of our cocktail recipe (shaken, not stirred). Optional ingredients: integration with external test management tools and Selenium grid providers.
Thanks to pytest and its plugins you have a rich command line interface (CLI); with Jenkins you can schedule automated builds, set up CI, let non-technical users or other stakeholders execute parametrized test runs or build always-fresh test data on the fly for manual testing, etc. You just need a browser; nothing installed on your computer.
Here is how our recipe looks:
Now let's go through the features provided by the Jenkins "build with parameters" graphical interface, explaining option by option when and why they are useful.
Target environment (ENVIRONMENT)
In this article we are not talking about regular unit tests, the basis of your testing pyramid. Instead we are talking about system, functional, API, integration and performance tests to be launched against a particular instance of an integrated system (e.g., dev, alpha or beta environments).
You know, unit tests are good but not sufficient: it is important to verify whether the integrated system (sometimes several complex systems developed by different teams under the same or third-party organizations) works as it is supposed to. It might happen that systems with 100% unit test coverage don't play well together after integration, for many different reasons. So with unit tests you take care of your code quality; with higher test levels you take care of your product quality. Thanks to these tests you can confirm expected product behavior or criticize your product.
So thanks to the ENVIRONMENT option you will be able to choose one of the target environments. It is important to be able to reuse all your tests and launch them against different environments without having to change your testware code. Under the hood the pytest launcher switches between different environments thanks to pytest-variables parametrization, using the --variables command line option, where each available option in the ENVIRONMENT select element is bound to a variables file (e.g., DEV.yml, ALPHA.yml, etc.) containing what the testware needs to know about the target environment.
Generally speaking, you should be able to reuse your tests without any modification thanks to a parametrization mechanism. If your test framework doesn't let you change the target environment and forces you to modify your code, change framework.
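For example, here is a minimal sketch of how the testware might consume such variables; the file name and keys are hypothetical:

# DEV.yml (hypothetical):
#   base_url: https://dev.example.com

# conftest.py -- pytest-variables exposes the file passed via
# "pytest --variables DEV.yml" as the "variables" dictionary fixture
import pytest

@pytest.fixture(scope="session")
def base_url(variables):
    return variables["base_url"]

# test_homepage.py
def test_homepage(base_url):
    # interact with the chosen target environment
    ...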
Browser settings (BROWSER)
This option makes sense only if you are going to launch browser-based tests; it will be ignored for other types of tests (e.g., API or integration tests).
You should be able to select a particular browser version (latest or a specific one) if any of your tests require a real browser (not needed for API tests, just to make one example), and preferably you should be able to integrate with a cloud system that allows you to use any combination of real browsers and OSes (not only a minimal subset of versions and only Firefox and Chrome, like several online test platforms do). Thanks to the BROWSER option you can choose which browser and version to use for your browser-based tests. Under the hood the pytest launcher will use the --variables command line option provided by the pytest-variables plugin, where each option is bound to a file containing the browser type, version and capabilities (e.g., FIREFOX.yml, FIREFOX-xy.yml, etc.). Thanks to pytest, or any other code-based testing framework, you will be able to combine browser interactions with non-browser actions or assertions.
A lot of big fat warnings about rec&play online platforms for browser testing, or if you want to implement your testing strategy using only (or too many) browser-based tests. You shouldn't consider only whether they provide a wide range of OSes, versions and the most common browsers. They should also let you perform non-browser-based actions or assertions (interaction with queues, database interaction, HTTP POST/PUT/etc. calls, and so on). What I mean is that sometimes a browser alone is not sufficient for testing your system: it might be good for a CMS, but if you are testing an IoT platform you don't have enough control and you will write completely useless or low-value tests (e.g., pure UI checks instead of testing reactive side effects depending on external triggers, reports, device activity simulations causing effects on the web platform under test, etc.).
In addition, be aware that some browser-based online testing platforms don't use Selenium as their browser automation engine under the hood. For example, during a software selection I found an online platform using JavaScript injection to implement user action interactions inside the browser, and this might be very dangerous. Let's consider a login page whose input elements take a while to become ready to accept user input, once some conditions are met. If for some reason a bug never unlocks the disabled login form behind a spinner icon, your users won't be able to log in to that platform. Using Selenium you'll get a failing result due to a timeout error (the test will wait for elements that will never be ready to interact with, and after a few seconds it will raise an exception), and that's absolutely correct. Using that platform the test was green, because under the hood the input element interaction was implemented using DOM actions, with the final result of having all your users stuck: how can you trust such a platform?
OS settings (OS)
This option is useful for browser-based tests too. Many Selenium grid vendors provide real browsers on real OSes, and you can choose the desired combination of versions.
Resolution settings (RESOLUTION)
As with the options above, many vendor solutions let you choose the desired screen resolution for automated browser-based testing sessions.
Select tests by names expressions (KEYWORDS)
Pytest lets you select the tests you are going to launch by choosing a subset of tests that matches an expression based on test and module names.
For example, I find it very useful to add the test management tool reference to test names; this way you will be able to launch exactly that test:
c93466
Or, for example, all test names containing the word login but not c92411:
login and not c92411
Or if you organize your tests in different modules you can just specify the folder name and you'll select all the tests that live under that module:
api
Under the hood the pytest command will be launched with
-k "EXPRESSION", for example
-k "c93466"
It can be used in combination with markers, a sort of test tag.
Select tests to be executed by tag expressions (MARKERS)
Markers can be used alone or in conjunction with keyword expressions. They are a sort of tag expression that lets you select just the minimum set of tests for your test run.
Under the hood the pytest launcher uses the command line syntax
-m "EXPRESSION".
For example you can see a marker expression that selects all tests marked with the
edit tag excluding the ones marked with
CANBusProfileEdit:
edit and not CANBusProfileEdit
Or execute only edit negative tests:
edit and negative
Or all integration tests
integration
It's up to you to create granular markers for features and everything you need to select your tests (e.g., functional, integration, fast, negative, ci, etc.).
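For example, here is a minimal sketch of how tests can be tagged (the marker names are illustrative; remember to register custom markers in your pytest.ini to keep things tidy):

import pytest

@pytest.mark.edit
@pytest.mark.negative
def test_edit_profile_with_invalid_name():
    ...

# selected by:  -m "edit and negative"
# excluded by:  -m "edit and not negative"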
Test management tool integration (TESTRAIL_ENABLE)
All my tests are decorated with the test case identifier provided by the test management tool; in my company we are using TestRail.
If this option is enabled the test results of executed tests will be reported in the test management tool.
Implemented using the
pytest-testrail plugin.
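For example, a minimal sketch of a test bound to a TestRail case, assuming the decorator documented in the pytest-testrail README (the case ID is hypothetical):

from pytest_testrail.plugin import pytestrail

@pytestrail.case('C93466')
def test_login():
    ...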
Enable debug mode (DEBUG)
The debug mode enables verbose logging.
In addition, for browser-based tests it opens Selenium grid sessions activating the debug capability options (https://www.browserstack.com/automate/capabilities): for example verbose browser console logs, video recordings, screenshots for each step, etc. In my company we are using a local installation of Zalenium and BrowserStack Automate.
Block on first failure (BLOCK_FIRST_FAILURE)
This option is very useful for the following needs:
- a new build was deployed and you want to stop on the very first failure for a subset of sanity/smoke tests
- you are launching repeated, long running, parallel tests and you want to block on first failure
The first usage lets you gain confidence with a new build, stopping on the very first failure so that you can analyze what happened.
The second usage is very helpful for:
- random problems (playing with the number of repeated executions, random order and parallelism, you can increase the probability of reproducing a random problem in less time)
- memory leaks
- testing system robustness: you can stimulate your system running some integration tests sequentially and then raise the parallelism level until your local computer is able to sustain the load. For example, launching 24+ parallel integration tests on a simple laptop with pytest running on a virtual machine is still fine. If you need something heavier you can use distributed pytest-xdist sessions or scale further with BlazeMeter
As you can imagine, you may combine this option with COUNT, PARALLEL_SESSIONS, RANDOM_ENABLE and DEBUG depending on your needs. You can test your tests' robustness too.
Under the hood this is implemented using pytest's -x option.
Parallel test executions (PARALLEL_SESSIONS)
Under the hood this is implemented with pytest-xdist's command line option -n NUM, which lets you execute your tests with the desired parallelism level. pytest-xdist is very powerful and provides more advanced options and network-distributed executions. See https://github.com/pytest-dev/pytest-xdist for further options.
Switch between different Selenium grid providers (SELENIUM_GRID_URL)
For browser-based testing, by default your tests will be launched on a remote grid URL. If you don't touch this option the default grid will be used (a local Zalenium or any other provider), but in case of need you can easily switch provider without having to change anything in your testware.
If you want, you can save money by maintaining and using a local Zalenium as the default option; Zalenium can be configured as a Selenium grid router that dispatches the capabilities it is not able to satisfy itself. This way you will be able to save money and increase the parallelism level a little without having to change plan.
Repeat test execution for a given amount of times (COUNT)
Already discussed above, this option is often used in conjunction with BLOCK_FIRST_FAILURE (pytest's core -x option).
If you are trying to diagnose an intermittent failure, it can be useful to run the same test or group of tests over and over again until you get a failure. You can use py.test's
-x option in conjunction with pytest-repeat to force the test runner to stop at the first failure.
Based on pytest-repeat's
--count=COUNT command line option.
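Besides the command line option, pytest-repeat also provides a marker for repeating one specific test (a minimal sketch; the count is arbitrary):

import pytest

@pytest.mark.repeat(100)
def test_intermittent_behavior():
    # executed 100 times; combine with -x to stop at the first failure
    ...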
Enable random test ordering execution (RANDOM_ENABLE)
This option enables random test execution order.
At the moment I'm using the pytest-randomly plugin, but there are 3 or 4 similar alternatives I still have to try out.
By randomly ordering the tests, the risk of surprising inter-test dependencies is reduced.
Specify a random seed (RANDOM_SEED)
If you get a failure when executing randomly ordered tests, you should be able to reproduce it systematically by rerunning the same test order with the same test data.
Again, from the pytest-randomly readme:
By resetting the random seed to a repeatable number for each test, tests can
create data based on random numbers and yet remain repeatable, for example
factory boy’s fuzzy values. This is good for ensuring that tests specify the
data they need and that the tested system is not affected by any data that is
filled in randomly due to not being specified.
Play option (PLAY)
This option will be discussed in a dedicated blog post I am going to write.
Basically, you can paste a JSON serialization of actions and assertions and the pytest runner will execute your test procedure.
You just need a computer with a browser to run any kind of test (API, integration, system, UI, etc.). You can paste the steps to reproduce a bug into a JIRA issue, and everyone will be able to paste them into the Jenkins build-with-parameters form.
See
pytest-play for further information.
If you are going to attend the next PyCon in Florence, don't miss the following pytest-play talk presented by Serena Martinetti:
UPDATES:
How to create a pytest project
If you are a little bit curious about how to install pytest or create a pytest runner with Jenkins, you can have a look at the following scaffolding tool:
It provides a hello world example that lets you start with the test technique most suitable for you: plain Selenium scripts, BDD, or pytest-play JSON test procedures. If you want, you can install a page objects library. So you can create a QA project in minutes.
Your QA project will ship with a Jenkinsfile that requires a tox-py36 Docker executor providing a python3.6 environment with tox already installed; unfortunately tox-py36 is not yet public, so at the moment you should implement it on your own.
Once you provide a tox-py36 Docker executor, the Jenkinsfile will create the build-with-parameters Jenkins form automatically on the very first Jenkins build of your project.
Conclusions
I hope you'll find some useful information in this article: nice-to-have features for test frameworks or platforms, a little bit of curiosity about the Python world, or a new pytest plugin you never heard about.
Feedback and contributions are always welcome.
Tweets about test automation and new articles happen here: