When we first start to dip our toes into the deep waters of automated software testing, all those different kinds of tests can feel intimidating. In this article, I give you a quick overview of the most crucial testing strategies. After reading this tutorial, you should have a much clearer picture of the strengths and weaknesses of the different kinds of testing methodologies.
When it comes to automated testing, a lot of different names for the various types of tests get thrown around: E2E testing, acceptance testing, integration testing, unit testing,… to name a few. As far as I know, there is no official authority that could settle these definitions once and for all, so I, for my part, settled on the following three:
- E2E Tests: test a system including all of its surrounding infrastructure (data persistence, third-party APIs,…).
- Acceptance Tests: test acceptance criteria of a system from the perspective of a user.
- Unit Tests: test the functionality of a small unit (component or function) from the consumer’s perspective (developer, user, or other parts of the system).
Even companies that don’t have an explicit testing strategy do some kind of testing. Namely one of the most expensive forms of testing: disorganized manual testing by developers and other stakeholders. But what are more reasonable testing strategies we can follow?
Scientific testing (€€€€€)
Ideally, we want to test our system under 100% real-world conditions. That means a representative set of users on the production system (or a perfect copy of the production system).
If we have unlimited money and time, this is the way to go.
Manual testing (€€€€)
The second-best option would be extensive manual testing by QA people on a production-like system under production-like conditions.
If we have a lot of money and time or work in an industry that demands it (aviation, health care, finance) and pays for it, this might be a realistic (or necessary) approach.
Automated E2E testing (€€€)
Now we come to territory that is financially feasible for most companies: automated, real E2E testing on a production-like system.
Because such tests are complicated to set up, time-consuming to maintain, and in most cases also pretty slow, they are still costly. But they can give us a lot of confidence relatively cheaply (compared to full manual testing).
Automated acceptance testing (€€)
Unlike full E2E testing, automated acceptance testing focuses on the acceptance criteria of a system without exercising the whole system and its infrastructure.
If done right, such tests are very fast, stable, and straightforward to implement. We don't get the same level of confidence as with E2E tests, because we don't verify that our infrastructure works correctly, but we still get enough confidence for relatively little money.
Unit tests (€)
Unit tests are fast. Writing them is fast and running them is fast. And if we follow best practices, maintenance is also straightforward. But they don’t test if we have wired up all of the different parts of our application correctly. Therefore, the amount of confidence they can give us that our application works as a whole is limited.
Still, there is no good reason for not writing unit tests.
How to decide which strategy to use?
Typically, we should use all of the methods above but focus heavily on automated acceptance tests and unit tests, because in most situations mocking our infrastructure is fine.
More often than not, full E2E testing leads to diminishing returns: the marginal gain in confidence it buys us is not always worth the additional cost.
Think of it in risk-management terms: the combination of unit tests and automated acceptance tests (with occasional manual testing) might reduce the risk of deploying a bug to production from 50% to 5% per deploy. Pushing those last 5% below 1% with extensive E2E and manual testing is possible, but at what cost? Those last few percentage points might cost us as much as the previous 45 (and sometimes even more). On top of that come the cost of maintaining those tests and slower deployments, because those testing strategies are a lot slower.
If we write software that keeps airplanes in the air, we might not have a choice. But if we build a run-of-the-mill SaaS application, we do.