In Chapter 5, “Transitioning Typical Processes,” we mentioned Antony Marcano’s blog post about defect tracking systems being a hidden backlog in agile teams. Antony shares his ideas about how to bring that secret out into the open.
XP publications suggest that if you find a bug, you should write an automated test that reproduces it. Many teams file a bug report and then write a separate automated test. I’ve found that this results in duplication of effort—and therefore waste. When we write a bug report, we state the steps, what should have happened (the expectation), and what actually happened (the anti-expectation). An automated test tells you the same things—the steps, the expectation, and, when it is run for the first time, a demonstration of the anti-expectation. When you can write an automated acceptance test as easily as you can write a bug report, that duplication of effort disappears.
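For illustration, here is a minimal sketch of a bug captured directly as a failing acceptance test. It uses pytest, and the shopping-cart scenario and Cart class are invented for this example: the test body records the steps, the assertion states the expectation, and the first run fails, demonstrating the anti-expectation just as a bug report would.

```python
# Hypothetical sketch: a bug captured as an automated acceptance test.
# The module under test (myshop.cart) is assumed, not a real library.
from myshop.cart import Cart


def test_removing_last_item_returns_cart_total_to_zero():
    # Steps: the reproduction path we would otherwise write into a bug report
    cart = Cart()
    cart.add_item("book", price=10.00)
    cart.remove_item("book")

    # Expectation: what should happen. On the first run the assertion fails,
    # demonstrating the anti-expectation (the total is still 10.00) until the
    # misbehavior is fixed.
    assert cart.total() == 0.00
```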
Bug metrics are all that remain. Bug metrics are traditionally used to help predict when software will be ready for release or to highlight whether quality is improving or worsening. In test-first approaches, rather than telling us whether quality is improving or worsening, they tell us how good we were at predicting tests—that is, how big the gaps were in our original thinking. This is useful information for retrospectives, and it can be gathered simply by tagging each test with details of when it was identified: during story elaboration, during post-implementation exploration, or in production. As for predicting when we will be able to release—when we are completing software of “releasable quality” every iteration—this job is handled by burn-down/burn-up charts and the like.
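One lightweight way to do that tagging, assuming pytest is in use, is with custom markers; the marker names below are invented, and they would normally be registered in pytest.ini to avoid warnings.

```python
# Hypothetical sketch: tag each test with where it was identified so a
# retrospective can count how many tests came from each source.
import pytest


@pytest.mark.from_story_elaboration
def test_discount_applies_to_sale_items():
    ...


@pytest.mark.from_exploratory_testing
def test_discount_is_not_applied_twice_when_the_page_is_refreshed():
    ...


@pytest.mark.from_production
def test_order_total_handles_zero_quantity_lines():
    ...
```

Counting the tests under each marker (for example, pytest -m from_exploratory_testing --collect-only -q) then gives a rough measure of how big the gaps in the original thinking were.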
On one new project I was working on, I suggested that we start using a bug-tracking system only when the need for one became compelling. We captured the output of exploratory testing performed inside the iteration as automated tests rather than as bug reports. We determined whether each test belonged to the current story or to another story, or whether it inspired a new story. We managed these stories as we would any other story and used burn-down charts to predict how much scope would be done by the end of the iteration. In the end, we never even set up a bug-tracking system.
There is a difference between typical user stories and bug-inspired user stories, however. Previously, our stories and tests dealt only with missing behaviors (i.e., features we know we want to implement in the future). Now, they also started to represent misbehaviors, expressed in the form “does X rather than Y.” The “rather than” was understood by the customer to mean “that’s something that happens currently”—a misbehavior rather than merely a yet-to-be-implemented behavior.
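As a small, hypothetical illustration of that phrasing (the invoice domain and the helper function are invented), the test name itself can carry the “rather than”:

```python
# Hypothetical sketch: the test name states the desired behavior "rather than"
# the misbehavior the software currently exhibits.
def test_invoice_lists_each_line_item_once_rather_than_duplicating_it():
    invoice = build_invoice(["widget"])  # assumed test helper
    assert invoice.count_lines_for("widget") == 1
```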
Using this test-only approach to capturing bugs, I’ve noticed that bug-inspired stories are prioritized as equals with new-feature user stories, whereas before, teams often gave more attention to the “cool new features” in the product backlog than to the misbehaviors described in the bug-tracking system. That’s when I realized that bug-tracking systems are essentially hidden, or secret, backlogs.
On some teams, however, the opposite is true. Fix-all-bugs policies can give more attention to bugs at the expense of perhaps more important new features in the main backlog.
Now, if I’m coaching a team mid-project, I help them find better and faster ways of writing automated tests. I help them apply those improvements to writing bug-derived automated tests. I help them find the appropriate story—new or existing—and capture the aggregate information that is useful in retrospectives. Eventually, they come to the same realization that I did: traditional bug tracking starts to feel wasteful and redundant. That’s when they decide that they no longer want or need a hidden backlog.
If bugs are simply logged in a defect-tracking system (DTS), important information can effectively be lost to the project. When we write acceptance tests to drive development, we tend to focus on desired behavior. Learning about undesired behavior from a defect, and turning that learning into stories, is a vital part of producing the right functionality.