A few months back we had a conference at work, and we discussed the upcoming year and how we could improve the system from a customer perspective. A recurring problem of ours is that we lack good test coverage, and as a result, unfortunately, some bugs are discovered late in the deployment process. I brought up test coverage tools, and how/if we should be using them to get an overview, measure progress, and maybe even fail builds. This resulted in (not so) Stupid Question 325: Should I use test coverage & use thresholds to pass builds? I recorded the video a month ago, but I've had to take a little break from blogging to finish my ASP.NET book and therefore I didn't write a post.
Test coverage gives you some metrics in regards to how much of your code is covered by tests. It sounds straightforward, and useful, but it's not that simple. Test coverage can be measured in different ways, each with different criteria. Line coverage looks at whether or not each line has been executed during a test. Function coverage checks if the methods or functions have been called, while statement coverage looks at whether or not each statement has been executed. Line coverage alone won't give you statement coverage, since several statements can share a line, nor will it give you condition coverage (testing the sub-expressions within a condition).
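To make the difference between these criteria concrete, here is a minimal sketch in Python (the function and values are made up purely for illustration): a single test can execute every line of a function, giving 100% line coverage, while still leaving a branch completely untested.

```python
def discount(price, is_member):
    """Apply a 10% member discount (hypothetical example)."""
    if is_member:
        price = price * 0.9
    return price

# This single test executes every line of discount(), so a line
# coverage tool reports 100% -- but the is_member=False path is
# never taken, so branch coverage is only 50%.
assert discount(100, True) == 90.0
```

This is exactly why a tool that only counts lines can look reassuring while the false branch, often where the bugs live, goes untested.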
Therefore, for us, the first challenge is knowing what to look at. 100% coverage doesn't mean the solution is 100% tested. The second problem is what to aim for. What does the number even mean? To get 100% coverage we would have to test auto properties and scaffolded code. Some argue that you should, as properties are methods that could change, while others say it's unnecessary and creates clutter. Combine many tests with in-depth analysis and you also get performance problems. I'll get back to this later. Say that we decide on 80% coverage, but what if we only test the easy parts of the code to get to that metric? I've worked on systems that use test coverage as a build condition in the deployment pipeline, and I've seen firsthand how developers, me included, are tempted to write bad tests just to get a build to pass. Another problem is how this can slow down deploys. Some analyzers take hours to run, and if you run them per build… well, it's going to get slow. If you don't, and instead run them nightly, then how can you rely on the metric at all as a deployment deal breaker, unless you only do nightly builds?
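As a sketch of what those "bad tests written just to pass the build" can look like, here is a hypothetical Python example (all names invented for illustration): the first test calls the code, so every line counts as covered, but it asserts nothing, so a bug would never fail the build.

```python
def apply_shipping(order_total, country):
    """Hypothetical production code: free shipping from 50 within 'SE'."""
    if country == "SE" and order_total >= 50:
        return order_total
    return order_total + 9.95

# A coverage-gaming test: it executes the code, so the coverage tool
# counts those lines as covered, but there is no assertion -- a bug
# in the calculation could never make this test fail.
def test_apply_shipping_for_coverage():
    apply_shipping(60, "SE")  # result ignored, no assert

# A meaningful test pins down the actual behavior instead:
def test_free_shipping_over_threshold():
    assert apply_shipping(60, "SE") == 60

test_apply_shipping_for_coverage()
test_free_shipping_over_threshold()
```

Both tests raise the coverage number by the same amount, which is why a threshold alone says little about how well the code is actually tested.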
All in all, test coverage tools are fun and the metrics they spit out are interesting, but we won't be using them to pass or fail builds. Instead, I'll be collecting metrics to track change over time, and I will be using them to find areas that need more attention test-wise.
I’ll keep you posted!
What is your experience with test coverage? Yay or nay?