> If my unit tests are testing all of my public interfaces with "Given input X, expect output Y" and they're all passing, that gives me pretty good confidence that the system is in good shape.
Unfortunately, in my experience, most production bugs look more like this:
Module A: returns x + 1

Implementation:

    int A(int x) {
        if (x == 11) crash();
        return x + 1;
    }

Unit test:

    assert(A(1) == 2);
    assert(A(5) == 6);

Production:

    A(11);
The problem compounds as the parameter space of your system grows. Unless your unit tests exercise every combination of parameters and results for each module, you are likely to miss bugs like this.
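To make the combinatorial point concrete, here is a hypothetical two-parameter variant of the example above (the function, the values, and the specific failing pair are invented for illustration). A test suite can cover every individual parameter value it cares about and still never try the one combination that actually breaks:

    #include <assert.h>

    /* Hypothetical module: only one specific combination of the two
     * parameters misbehaves. */
    int B(int x, int y) {
        if (x == 11 && y == 7) return 0;  /* latent bug */
        return x + y;
    }

    int main(void) {
        /* Typical unit tests: each interesting value of x and y appears
         * somewhere, but never the failing combination. */
        assert(B(1, 1) == 2);
        assert(B(11, 1) == 12);
        assert(B(1, 7) == 8);

        /* Production: B(11, 7) silently returns 0 instead of 18. */
        return 0;
    }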
This is exactly my point, though: you wouldn't catch this in an e2e test either, unless you try it with an "11". If you're explicitly trying the "11" in an e2e test, why not just do it in a unit test instead? Once you've hit this bug once, you can add an "assert(A(11) == 12);" and move on with confidence. If you instead cover this specific scenario with an extra e2e test, you could be adding another 2+ minutes to every CI/CD run that ever happens on the project.
Any good unit test suite should, at a minimum, cover the min, max, and expected cases, as well as any known special cases. If there are unknown special cases, you're probably no more likely to find them with e2e tests than with unit tests.
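As a rough sketch of that minimum bar applied to the A() example above (assuming the "11" bug has since been fixed, and assuming the full int range is valid input, which is an assumption made for illustration):

    #include <assert.h>
    #include <limits.h>

    /* Sketch only: A() as it would look after the bug is fixed. */
    int A(int x) {
        return x + 1;
    }

    int main(void) {
        assert(A(INT_MIN) == INT_MIN + 1);  /* min */
        assert(A(INT_MAX - 1) == INT_MAX);  /* largest input that doesn't overflow */
        assert(A(0) == 1);                  /* expected case */
        assert(A(5) == 6);                  /* expected case */
        assert(A(11) == 12);                /* known special case, added after hitting the bug */
        return 0;
    }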
To be clear, I was trying to describe an interaction between two modules that the unit tests never exercise because the other side is mocked, but that is common in production. That is the most common cause of bugs, and it is also the type that e2e tests tend to catch quite well.
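A minimal sketch of that kind of bug (the modules and the mismatch are hypothetical): each module's unit tests pass because the other side is mocked or fed hand-picked values, yet the real pair disagrees about a value that only shows up in production.

    #include <assert.h>
    #include <limits.h>

    /* Module A: hands out the next ID. In production the counter can
     * wrap, so 0 is a value it really produces. */
    unsigned next_id(unsigned last) {
        return last + 1;  /* wraps to 0 after UINT_MAX */
    }

    /* Module B: written against a mock of A that never returned 0, so it
     * treats 0 as "no ID" and rejects it. */
    int store_id(unsigned id) {
        if (id == 0) return -1;
        /* ... persist the id ... */
        return 0;
    }

    int main(void) {
        /* Each module's unit tests pass in isolation. */
        assert(next_id(1) == 2);
        assert(store_id(42) == 0);

        /* The real interaction, which only an end-to-end path exercises:
         * A's wrap-around output is rejected by B. */
        assert(store_id(next_id(UINT_MAX)) == -1);
        return 0;
    }

No amount of per-module coverage against the mock trips over this; a test that drives the real A into the real B does.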