There’s another question I wanted to address with a separate email. This time, it’s about the choice between writing an integration test and relying on the fail fast principle.
1. Preamble
This question is for a section of my Unit Testing book (section 8.1.3, to be precise) where I discuss integration testing versus failing fast.
In that section, I mentioned the guideline for choosing appropriate integration test coverage for a business scenario:
- Unit test as many edge cases of this scenario as possible
- For an integration test, select the longest happy path in order to verify interactions with all out-of-process dependencies
- Any edge cases that can’t be covered with unit tests should be covered with integration tests
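To make the second bullet concrete, here is a rough sketch of what such a happy-path integration test might look like. This is not code from the book: the xUnit attributes, the CreateUser helper, the UserId property, and the "OK" return value are all assumptions for illustration.

// A sketch of an integration test for the longest happy path.
// CreateUser and the controller wiring are hypothetical; the test is
// assumed to run against a real database (an out-of-process dependency).
[Fact]
public void Changing_email_succeeds_for_an_existing_user()
{
    // Arrange: persist a user through the real repository
    User user = CreateUser("user@mycorp.com");
    var controller = new UserController(_repository);

    // Act: exercise the full path, including the database round-trip
    string result = controller.ChangeEmail(user.UserId, "new@gmail.com");

    // Assert: the success return value is an assumption here
    Assert.Equal("OK", result);
}

A single test like this verifies the controller, the domain class, and all out-of-process dependencies working together, which is why one happy-path test usually suffices.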
There is an exception to the last part of the guideline. There’s no need to test an edge case if an incorrect execution of that edge case immediately fails the entire application.
In the book, I used the example with the User domain class and the UserController:

// User
public void ChangeEmail(string newEmail)
{
    if (CanChangeEmail() == false)
        throw new Exception("CanChangeEmail check failed");

    /* changing the email */
}

// UserController
public string ChangeEmail(int userId, string newEmail)
{
    User user = _repository.GetUserById(userId);

    bool canChangeEmail = user.CanChangeEmail();
    if (canChangeEmail == false)
        return "Cannot change the email";

    /* the rest of the method */
}
Here, User implements a CanChangeEmail method and makes its successful execution a precondition for ChangeEmail().
At the same time, the controller invokes CanChangeEmail() itself and interrupts the operation if that method returns false.
In the book, I argued (and I still do) that even though you could theoretically cover this edge case with an integration test, such a test doesn’t provide significant enough value.
If the controller tries to change the email without consulting CanChangeEmail() first, the application crashes. This bug reveals itself on the very first execution and is therefore easy to notice and fix. It also doesn’t lead to data corruption.
Making bugs manifest themselves quickly is called the Fail Fast principle, and it’s a viable alternative to integration testing.
(Note that unlike the call from the controller to CanChangeEmail(), the presence of the precondition itself in User should be tested. But that is better done with a unit test; there’s no need for an integration test.)
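As a sketch, such a unit test needs only the User class itself, with no out-of-process dependencies. The xUnit attributes and the way a "cannot change email" user is constructed are assumptions for illustration, not code from the book:

// A sketch of a unit test for the precondition in User.
// CreateUserThatCannotChangeEmail is a hypothetical helper that builds
// a User for which CanChangeEmail() returns false.
[Fact]
public void ChangeEmail_throws_when_the_precondition_is_violated()
{
    User user = CreateUserThatCannotChangeEmail();

    // The precondition in ChangeEmail should fail fast with an exception
    Assert.Throws<Exception>(() => user.ChangeEmail("new@gmail.com"));
}

Because this test exercises only in-memory domain logic, it is cheap to write and fast to run, unlike an equivalent integration test.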
2. The question
Here’s the question I received for that section of the book:
I am assuming that the code

if (canChangeEmail == false)
    return "Cannot change the email";

is there so that the application can handle the situation gracefully, whereas the precondition statement would just cause an "application crash".
Therefore, I am unsure why you would not cover the example edge case with a test. How will the bug "reveal itself with the first execution" if it is not executed by a test?
If it is not executed by a test, surely there is then the possibility that the first time it will be executed is by a user in production? Thus, resulting in the application crashing due to a (not unlikely) user-journey!
That’s a good point. Indeed, why is it fine for the user to experience this crash in production? Why not cover this edge case with an integration test?
Every decision to write a test is a judgment call. In the book, I tried to provide useful heuristics as to which call to make in each particular situation, but fundamentally, this choice comes down to weighing the benefits of a test against its costs.
The cost of a test is the time it takes you to write it and then maintain it throughout the project’s lifetime.
On the other hand, the benefit is the overall damage the test helps you avoid. That overall damage consists of two components:
- How damaging this scenario is to the user and/or the application
- How likely this scenario is to happen
So, what would be the overall damage if, for this particular edge case, you rely on the Fail Fast principle (as in our example) instead of an integration test?
This edge case would crash the application, but that crash would manifest itself as a 500 error instead of a 400, which the user would see as something like "Sorry, unexpected server error" instead of the proper validation message. So yes, the experience isn’t great, but it isn’t terrible either.
What about the likelihood of this scenario? I would say it’s moderate. This isn’t the main (happy) path, but this scenario isn’t that obscure either.
So, we get the following:
Overall_damage = Damage * Likelihood = low * moderate = low
Given that the benefits of the test aren’t that high (because the damage it helps to avoid is low), I recommend against writing such a test.
Always keep in mind the opportunity cost of all your code, including test code. If it doesn’t provide significant enough value, ditch it. A small set of highly valuable tests would serve you much, much better than a large set of mediocre tests.
--Vlad
https://enterprisecraftsmanship.com
Enjoy this message? Here are more things you might like:
Workshops — I offer a 2-day workshop for organizations on Domain-Driven Design and Unit Testing. Reply to this email to discuss.
Unit Testing Principles, Patterns and Practices — A book for people who already have some experience with unit testing and want to bring their skills to the next level.
Learn more »
My Pluralsight courses — The topics include Unit Testing, Domain-Driven Design, and more.
Learn more »