
Creating Maintainable Automated Tests - Part 2


In part 1 of this series, I discussed why maintainable automated tests are important in software development. I began detailing some ways we can achieve maintainable tests without getting too technical, including getting the team involved in testing and choosing third-party libraries wisely. Part 2 continues this discussion with a few more ways maintainability can be improved.

Programming principles

A more well-known approach to maintainability is to ensure the test solution follows common programming principles, which were created specifically to produce better, more maintainable code.
• DRY – Don’t Repeat Yourself
o This is the big one. Duplicating code is bad practice, and few things make code harder to maintain. Making a change in one place is clearly more maintainable than having to update multiple occurrences of the same function.
• YAGNI – You Ain’t Gonna Need It
o If you aren’t going to use it, don’t include it. It is unnecessary code and will just cause confusion for anyone maintaining the code in the future.
• KISS – Keep It Simple, Stupid
o Don’t overcomplicate. Make the tests easy to read and understand by researching multiple solutions to the problem, to ensure you end up with the simplest one. Do not over-engineer a solution: it is very easy to follow a complicated path once you have started, without considering whether it is the best approach. Always take a step back, consider at a high level what you are trying to achieve, then analyse whether what you have created is sensible and clear.
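To illustrate DRY in a UI test context, here is a minimal sketch; the page object, configuration and credential names are hypothetical, not from a real project:

```csharp
using OpenQA.Selenium;

// Hypothetical sketch: login steps extracted into one shared helper
// instead of being duplicated across every test class.
public abstract class TestBase
{
    protected IWebDriver Driver;   // created in test setup (Selenium WebDriver)

    // All tests call this one method; if the login flow changes,
    // only this method needs to be updated.
    protected void LogInAsStandardUser()
    {
        Driver.Navigate().GoToUrl("https://example.test/login");
        var loginPage = new LoginPage(Driver);                       // hypothetical page object
        loginPage.EnterCredentials("standard_user", TestConfig.Password); // hypothetical config
        loginPage.Submit();
    }
}
```

With a helper like this, a broken login is fixed once rather than in every test that happens to start from the login page.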

Each test should test one thing

I believe that each test you write should test just one thing. This is a common practice in unit testing, but I think it should also be applied where possible within functional tests; each test should test just one behaviour. I have seen a number of test frameworks that include tests that cover multiple pieces of functionality. For example, here is a sample BDD test for testing Stack Overflow:

Scenario: Stack Overflow sorting

Given Stack Overflow homepage is open
When the user searches for “BDD”
Then questions relating to “BDD” are shown
When the user selects the “newest” filter
Then the results are sorted with the newest first

Whilst this may be a valid user journey and needs to be tested, it is testing two things: the search functionality and the sorting functionality. The issue here is that the error reporting is going to be unclear. When this test fails, the person monitoring the tests will see something like this message in the report:

“FAILED: Stack Overflow sorting”

This shows the scenario has failed and indicates that it is the ‘sorting’ that is broken. However, the test could have failed because the search functionality is broken, and the test never actually ran the sorting steps.

This has given the user a red herring, and they would have to spend time investigating the test to understand the failure more clearly. If we were to split this out into two tests, one for searching and one for sorting, the error messaging would be much clearer and the maintenance much simpler.
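As a sketch, the scenario above could be split like this, with the Given step in the second scenario performing the search as setup:

```gherkin
Scenario: Stack Overflow search
  Given Stack Overflow homepage is open
  When the user searches for "BDD"
  Then questions relating to "BDD" are shown

Scenario: Stack Overflow sorting
  Given search results for "BDD" are shown
  When the user selects the "newest" filter
  Then the results are sorted with the newest first
```

Now a failure in "Stack Overflow sorting" really does point at sorting, because broken search would fail the first scenario instead.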

There are arguments for testing multiple features in one test. For example, setting up test data is time-consuming, so why not test an additional feature whilst the system is already set up? I find that this makes maintenance cumbersome. The tests become less readable, and they are also larger than if they were split out, causing more work when investigating failures in the future.


Documentation

Documentation in a number of situations can be counter-productive because it becomes out of date quickly, sometimes as soon as it is published. However, I still believe documentation has value in aiding maintainability. I have found that adding a short ReadMe file within your solution can work really well in assisting future contributors.

Some of the most useful examples I have seen and used highlight the design patterns and practices in use within the solution.
For example, I’ve included a quick summary of how the page objects should inherit from a base object in an external library created for the project. This was done because the base object included common code that was required by all page objects.
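A base-object arrangement like that might look something like this sketch; the class and member names here are illustrative, not the actual library used on the project:

```csharp
using OpenQA.Selenium;

// Illustrative base object holding behaviour common to all page objects.
public abstract class BasePage
{
    protected readonly IWebDriver Driver;

    protected BasePage(IWebDriver driver)
    {
        Driver = driver;
        // Shared behaviour runs for every page object, e.g. waiting
        // for the page to finish loading before interacting with it.
        WaitForPageToLoad();
    }

    protected abstract void WaitForPageToLoad();
}

// Each page object inherits the common behaviour from the base.
public class SearchResultsPage : BasePage
{
    public SearchResultsPage(IWebDriver driver) : base(driver) { }

    protected override void WaitForPageToLoad()
    {
        // Page-specific readiness check would go here.
    }
}
```

A sentence or two in the ReadMe explaining this inheritance rule means new contributors create page objects consistently instead of reinventing the shared behaviour.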

When I was onboarding someone to the framework, they asked about the base object being used, and I was able to point them to the ReadMe, which provided all the information they needed. I have also used the ReadMe to detail what the tags used in the BDD feature files represent, as some of these tags would prevent a test scenario from running in certain CD pipelines.
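As a sketch, tags like these are the sort of thing worth documenting in the ReadMe. The tag names here are hypothetical, and any pipeline filtering they imply would be configured in the CD pipeline itself:

```gherkin
# @smoke - hypothetical tag: scenario runs on every commit
@smoke
Scenario: Stack Overflow search
  Given Stack Overflow homepage is open
  When the user searches for "BDD"
  Then questions relating to "BDD" are shown

# @manual-only - hypothetical tag: scenario is excluded from CD pipeline runs
@manual-only
Scenario: Stack Overflow sorting
  Given search results for "BDD" are shown
  When the user selects the "newest" filter
  Then the results are sorted with the newest first
```

Without the ReadMe, a contributor has no way of knowing why a tagged scenario silently never runs in the pipeline.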

This is the sort of information that is not necessarily obvious from the test solution alone. Including details like this within the ReadMe file saves the user time and allows them to maintain the solution in the way it was intended.

Another quick form of documentation I would recommend, at least for .Net projects, is to use summaries on functions. This allows the caller to get a quick understanding of the method without having to look at the code. E.g.:
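A summary in C# looks something like this; the method and page object names here are illustrative:

```csharp
/// <summary>
/// Searches Stack Overflow for the given term and returns the results page.
/// </summary>
/// <param name="searchTerm">The text to type into the search box.</param>
/// <returns>A page object representing the search results.</returns>
public SearchResultsPage SearchFor(string searchTerm)
{
    // Callers see the summary above in IntelliSense, so they rarely
    // need to open this implementation to understand the method.
    SearchBox.SendKeys(searchTerm);
    SearchBox.Submit();
    return new SearchResultsPage(Driver);
}
```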

I have found that this feature is often overlooked, and without it I have had to read the implementation in order to be confident using a function. It’s a small addition that can really help provide clarity and confidence in the automated tests.

So, there you have it: some more ways in which maintainable automated tests can be achieved without being too technical.

