Testing Your Testing Triangle

How many of us are familiar with the 'testing triangle'? And yet how many of us work with something that looks completely different? This usually isn't down to ignorance: even the most well-intentioned teams can wake up one morning to find their triangle looking a little unbalanced. I've seen triangles that are top-heavy, triangles that are missing entire levels, and countless testing trapeziums. I want to address what I see as the main reasons for this divergence and offer some ideas that should help mitigate these problems.

Levels of Testing

The testing triangle typically has three or four levels, but the boundaries between them can get a little hazy. Not only that, but different stakeholders (developers, testers and product owners) often have different definitions for each level. So, to set out my stall, here are the most common levels and definitions for the standard triangle.

| Level | Description |
| --- | --- |
| End-To-End Tests | Tests that replicate a user's workflow, usually through the UI. |
| Acceptance Tests | Formal tests executed to ensure that the system meets business requirements. |
| Integration Tests | Tests that verify that different modules/services can work together. |
| Unit Tests | Small, isolated tests that target specific units* of a system. |

*I'm aware that even the word 'unit' can be difficult to define, but that is a topic for another blog post.

Basic Reasons for Triangle Failure

If we are being honest, your triangle probably doesn't look like the one above. When these differences appear, it is usually because the maintenance of testing has been de-prioritised. Unfortunately, testing is widely considered a corner that can be cut to save time, and since the 'structural stability of our testing triangle' is not a metric that many businesses monitor, it is easily forgotten until it is too late.

This is because the benefits of testing are largely counterfactual. In the same way that it is difficult to measure how many road accidents are prevented by enforcing a minimum tread depth on tyres, it is difficult to measure the value of testing. While it is hard to put a figure on how much 'confidence' is worth, it is all too easy to point to an instance where a release was delayed because testing hadn't been completed, or where testing a feature took longer than developing it.

Because of this, as deadlines approach and the pressure to deliver features as quickly as possible rises, companies start to ignore the benefits of a robust testing process. Project managers begin asking whether we are spending too long on testing, or whether we can go without testing at a certain level.

Developers and testers also struggle to defend the advantages of the different levels of testing. When asked, "Can we cut out Integration and Acceptance tests and cover all of that with End-To-End tests and Unit tests?", it is very difficult to answer 'no'. Technically, you can cover everything you need to cover with just the two extremes of the triangle, but should you? It could be argued that this approach is (at least in the short term) more cost-effective, but I prefer to think of that cost saving as a loan that you now have to pay interest on. By slowing down test runs, making tests harder to write and maintain, and making failures harder to diagnose, that up-front saving has been converted into a cost that is added to every new feature and bug fix that is developed.

My Triangle

My triangle doesn't look like the one above either, but that is deliberate. I think the main reason our testing triangles diverge from the 'ideal' is that the classical triangle just isn't ideal any more. It is a hangover from the monolithic approach to software development, and it needs to be re-imagined.

The triangle that I have developed at Varealis is mostly the same, but it has been restructured to better support a micro-service platform focused on the continuous deployment of individual, independent services.

| Level | Description |
| --- | --- |
| End-To-End Tests | Tests that replicate a user's workflow, usually through the UI. |
| Pact Tests | Tests that ensure that an application meets its public contracts. |
| Integration Tests | Tests that verify that a service can work with its dependencies. |
| Acceptance Tests | Formal tests executed to ensure that the system meets business requirements. |
| Unit Tests | Small, isolated tests that target specific units* of a system. |

The biggest change to this triangle is the introduction of Pact Tests, which help us redefine the responsibilities of the other layers, allowing the Integration and Acceptance levels to be swapped. Pact Tests focus on a service's contracts: they ensure that a service implements its public interfaces, and therefore that services can interact as expected. Pact Tests are probably interesting enough to deserve an entire blog post of their own, to be honest.
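
In the meantime, here is a minimal sketch of what a consumer-side Pact test can look like, assuming PactNet 4.x and xUnit; the 'Checkout' and 'OrdersApi' service names and the /orders endpoint are made up for illustration:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using PactNet;
using Xunit;

public class OrdersApiConsumerTests
{
    private readonly IPactBuilderV3 pactBuilder;

    public OrdersApiConsumerTests()
    {
        // "Checkout" (consumer) and "OrdersApi" (provider) are hypothetical names.
        var pact = Pact.V3("Checkout", "OrdersApi", new PactConfig());
        pactBuilder = pact.WithHttpInteractions();
    }

    [Fact]
    public async Task GettingAnOrder_MatchesTheContract()
    {
        // Describe the interaction this consumer expects from the provider.
        pactBuilder
            .UponReceiving("a request for order 1")
            .Given("order 1 exists")
            .WithRequest(HttpMethod.Get, "/orders/1")
            .WillRespond()
            .WithStatus(HttpStatusCode.OK)
            .WithJsonBody(new { id = 1, status = "Placed" });

        // PactNet starts a mock provider; the consumer's own HTTP code is
        // exercised against it, and the interaction is recorded as a pact file.
        await pactBuilder.VerifyAsync(async ctx =>
        {
            using var client = new HttpClient { BaseAddress = ctx.MockServerUri };
            var response = await client.GetAsync("/orders/1");
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        });
    }
}
```

The recorded pact file is then verified against the real provider in that service's own build, which is what lets both sides evolve and deploy independently.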

By covering inter-service communication with Pact Tests, we no longer have to rely on more brittle Integration (or End-To-End) tests here. This means we can narrow the scope of Integration tests so that they only cover the integration of an individual micro-service with its own dependencies (e.g. databases, message buses or third-party APIs).
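
As a sketch of that narrowed scope, the test below exercises only the service's own persistence against a real database and nothing else; the Order and OrdersContext types and the connection string are hypothetical stand-ins:

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrdersContext : DbContext
{
    public OrdersContext(DbContextOptions<OrdersContext> options) : base(options) { }
    public DbSet<Order> Orders => Set<Order>();
}

public class OrdersPersistenceIntegrationTests
{
    // Hypothetical test database; in practice this might be provisioned per build.
    private const string ConnectionString =
        "Host=localhost;Database=orders_test;Username=test;Password=test";

    [Fact]
    public async Task SavedOrder_CanBeReadBack()
    {
        var options = new DbContextOptionsBuilder<OrdersContext>()
            .UseNpgsql(ConnectionString) // a real database, not a mock
            .Options;

        await using (var context = new OrdersContext(options))
        {
            context.Orders.Add(new Order { Status = "Placed" });
            await context.SaveChangesAsync();
        }

        // A fresh context proves the round trip went through the database itself.
        await using (var context = new OrdersContext(options))
        {
            var loaded = await context.Orders.SingleAsync(o => o.Status == "Placed");
            Assert.Equal("Placed", loaded.Status);
        }
    }
}
```

Note that no other micro-service is involved: anything on the far side of a service boundary is already covered by the Pact Tests above.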

By moving Acceptance tests down below the Integration tests, we can exclude third-party dependencies and isolate the business logic as implemented by the service. At this level we control the service's dependencies, swapping them out for mocks or in-memory implementations (for example, MassTransit's in-memory transport or Entity Framework's InMemory database provider). Not only does this drastically improve the speed of the tests, it also means they can run in more places, cover scenarios that are otherwise hard to replicate in a real environment, and cover integrations that unit tests cannot reach. In another blog post, I'll go into further detail and show how easily this can be achieved.
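
Until then, here is a rough sketch of the shape these tests can take, assuming MassTransit 8's test harness (which runs on the in-memory transport by default) and a made-up PlaceOrder/OrderPlaced message pair:

```csharp
using System.Threading.Tasks;
using MassTransit;
using MassTransit.Testing;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

// Hypothetical messages and consumer standing in for real business logic.
public record PlaceOrder(int OrderId);
public record OrderPlaced(int OrderId);

public class PlaceOrderConsumer : IConsumer<PlaceOrder>
{
    // A real consumer would also persist state here, e.g. via an EF Core
    // DbContext configured with UseInMemoryDatabase at this test level.
    public Task Consume(ConsumeContext<PlaceOrder> context)
        => context.Publish(new OrderPlaced(context.Message.OrderId));
}

public class PlaceOrderAcceptanceTests
{
    [Fact]
    public async Task PlacingAnOrder_PublishesOrderPlaced()
    {
        // The in-memory transport replaces the real message bus.
        await using var provider = new ServiceCollection()
            .AddMassTransitTestHarness(cfg => cfg.AddConsumer<PlaceOrderConsumer>())
            .BuildServiceProvider(true);

        var harness = provider.GetRequiredService<ITestHarness>();
        await harness.Start();

        await harness.Bus.Publish(new PlaceOrder(1));

        // Assert on the business outcome, not on any implementation detail.
        Assert.True(await harness.Published.Any<OrderPlaced>());
    }
}
```

Because nothing here touches the network or the disk, tests like this run in milliseconds and anywhere the build runs.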

Our Goal

Testing is something that I, and by extension Varealis, value highly. We are constantly striving to improve our best practices and we want to share our progress whenever possible.

Jamie Peacock
Published on 29/08/2019
Founder of Varealis
