Writing Tests for Coverage
A couple of years ago I worked with a team that took over a software project from a different company. The project had been developed under a lot of time pressure, and the original team had decided to skip writing tests to save time. The focus was on implementing critical features and fixing the bugs that had accumulated in the backlog.
"Skip tests and focus on bug fixes" is kinda like saying you don't have the money to eat a balanced diet, because you need it for medicine to try and fix your lifestyle-induced health issues. Or you don't have time and energy to exercise, because you are too busy with these health problems.
Skimping on quality measures to save time and money is always a bad idea, but unfortunately a common one - in software development as in other areas of engineering.
I was asked to create a test strategy document for the new team, in which I pushed for writing unit tests (among other things). After some discussion, project management agreed to make some time available for writing tests. However, the consensus was to write tests with the aim of reaching 50% unit test code coverage.
The argument was that there was simply not enough time to get everything under test, and that developers should instead focus on testing the parts of the code that needed it the most and/or were easiest to get under test.
As a result, tests were partly auto-generated for the code where this was easiest to do (e.g. POJOs with getters and setters).
Predictably, tests were also written primarily to meet the coverage objective, with most of them not asserting any behavior.
Imagine a lot of unit tests with the only assertion being invokeMethod().doesNotThrowException().
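To illustrate, here is a minimal sketch of what such a test looks like next to one that actually checks behavior (JUnit 5; the inlined PriceCalculator class is hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // Hypothetical production class, inlined to keep the sketch self-contained.
    static class PriceCalculator {
        int totalCents(int unitPriceCents, int quantity) {
            return unitPriceCents * quantity;
        }
    }

    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    void coverageOnly() {
        // Executes the code, so it counts as covered, but even a completely
        // wrong result would still pass.
        assertDoesNotThrow(() -> calculator.totalCents(999, 2));
    }

    @Test
    void assertsBehavior() {
        // Covers the same code, but the expected outcome is actually checked.
        assertEquals(1998, calculator.totalCents(999, 2));
    }
}
```

Both tests produce exactly the same coverage number; only the second one would catch a regression.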
The most complex functionality of the code base, which made up some 30% of all code, was not tested at all. This part had not been written with testability in mind, so if you wanted to increase coverage, you were better off writing tests for simpler components.
Test Coverage as a Metric
A word about code coverage, as it seems to be a somewhat controversial topic. Code coverage is a metric that indicates how much of your code is executed when running your tests. It does not indicate whether your tests are good or whether they test the right things.
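To make that concrete, here is a minimal sketch (JUnit 5, with a hypothetical Discount class inlined): the single test below executes the happy path, so a coverage tool will report those lines as covered and the validation branch as missed, no matter how meaningful the assertion is.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountTest {

    // Hypothetical production class, inlined to keep the sketch self-contained.
    static class Discount {
        int apply(int priceCents, int percent) {
            if (percent < 0 || percent > 100) {
                // Never executed by the test below: line coverage marks this
                // as uncovered, branch coverage flags the missed branch.
                throw new IllegalArgumentException("percent out of range");
            }
            return priceCents - (priceCents * percent / 100);
        }
    }

    @Test
    void appliesPercentageDiscount() {
        // Covers the happy path only; coverage says nothing about whether
        // this assertion checks the right thing.
        assertEquals(900, new Discount().apply(1000, 10));
    }
}
```

A coverage tool would flag the throw statement as never executed, which is useful information; it just cannot tell you whether the assertion checks the right thing.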
I sometimes have discussions with people who say that code coverage is therefore a poor metric. And while I agree that 99% line coverage does not tell you that your tests are great, what does 1% line coverage tell you?
First, look at what your coverage numbers are. Then judge the quality of your tests. If you have no coverage, there are no tests to judge - and therefore none that can be good.
TL;DR:
- High test coverage: maybe good
- Low test coverage: certainly bad
The Fallout
A lot went wrong in this project. Writing unit tests for coverage was only one of the aspects I was unhappy with.
But it was one that really stung, because I was the advocate for test automation, and my efforts had several negative effects:
- The tests written were not useful
- The team's opinion of unit tests did not improve as a result, and it became even harder to argue for writing tests
- Time was wasted on writing tests that did not help the project
- Much focus was instead put on writing E2E tests, which turned out to be a massive time sink with little payoff
Lessons Learned
I learned a lot in this project. Unfortunately, most of the lessons were learned because so much went wrong. I might write more about this project elsewhere, with a focus on management, development practices, or team composition. For now though, here are some things I took away regarding test automation:
If you argue for writing tests, always make sure you are allowed to lead by example.
I gave up too quickly on pushing to be allowed to write some tests myself and show how it should be done. I could have done so in pair-programming or mob-programming sessions.
Either write useful tests or don't write any at all.
I had hoped that other team members, now "being allowed" to write tests, would use the opportunity to write good ones. In a team that did not see the value of tests, this was a naive assumption. Warning people about the futility of writing tests for coverage is not enough. If you are the one pushing for writing tests and the tests turn out to be a waste of time, you will be the one blamed for it.
Focus on individual contributors.
Later in the project I set up a separate repository for a service that fed data to the main application. There I paired with two junior developers and showed them how to write code guided by tests. The most satisfaction I got out of this project came from seeing those developers adopt some good programming practices (not just writing tests). I am still in contact with one of them, years after leaving the company.