Other teams find value in documenting problems and fixes in a defect tracking system (DTS), especially problems that weren’t caught until after code was released. They may even look for patterns in the bugs that got to production and do root cause analysis to learn how to prevent similar issues from recurring. Still, defect systems don’t provide a good forum for face-to-face communication about how to produce higher-quality code.
Chapter 5, “Transitioning Typical Processes,” talks about why your team may or may not want to use a defect tracking system.
Lisa and her fellow testers prefer to talk to a programmer as soon as a problem is found. If the programmer can fix it immediately, there’s no need to log the bug anywhere. If no programmer is available right away and the bug might otherwise be forgotten, they write a card for it or enter it into their DTS.
We’ve added this section to this chapter because this is the point where the question arises. You’ve been writing tests first, but you’re finding problems as you work with the programmer. Do you log a bug? If so, how? You’ve been doing exploratory testing and found a bug in a story that was marked “done.” Do you log a bug for that? Let’s discuss defects further and consider the options open to you and your team.
Is It a Defect or Is It a Feature?
First, let’s talk about defects versus features. The age-old question in software development is, “What is a bug?” Some answers we’ve heard: it’s a deviation from the requirements, or it’s behavior that is not what was expected. Of course, some defects are obvious, such as incorrect output or incorrect error messages. But what really matters is the user’s perception of the quality of the product. If the customer says it is a defect, then it is a defect.
In agile, we have the opportunity to work with customers to get things fixed to their satisfaction. Customers don’t have to try to think of every possible feature and detail up front. It is okay for them to change their minds when they see something.
In the end, does it really matter if it is a bug or a feature if it needs to be fixed? The customer chooses priorities and the value proposition. If software quality is a higher priority for the customer than getting all of the new features, then we should try to fix all defects as we find them.
Customers on the team use their knowledge to give the team the best advice they can on day-to-day development. However, when a product goes to user acceptance testing (UAT) and is exposed to a larger customer base, there will always be new requests, whether they arrive as bug reports or as enhancement requests.
Technical Debt
One way of thinking about defects is as technical debt. The longer a defect goes undetected in the system, the greater its impact. It is also true that leaving bugs festering in a code base has a negative effect on code quality, system intuitiveness, system flexibility, team morale, and velocity. Fixing one defect in buggy code may reveal more, so maintenance tasks take longer.
Chapter 6, “The Purpose of Testing,” explains how tests help manage technical debt.
Zero Bug Tolerance
Janet encourages the teams she works with to strive for “zero tolerance” of bugs. New agile teams usually have a hard time believing it can be done. In one organization Janet worked with, she challenged each of the five project teams to see how close they could come to zero bugs outstanding at the end of each iteration, and zero at release time.
Zero Bug Iterations
Jakub Oleszkiewicz, the QA manager at NT Services [2008], recounts how his team learned to finish each iteration with no bugs carried over to the next one.
I think it really comes down to exceptional communication between the testers, the developers, and the business analysts. Discipline was also key, because we set a goal to close off iterations with fully developed, functional, deployable, and defect-free features while striving to avoid falling into a waterfall trap.

To us, avoiding waterfall meant keeping code and test activities aligned; we tried to plan an iteration’s activities so that a given feature’s test cases were designed and automated at the same time as that feature’s code was written. We quickly found that we were practicing a form of test-driven development. I don’t think it was pure TDD, because we weren’t actually executing the tests until code was checked in, but we were developing the tests as the developers wrote code, and the developers were asking us how our tests were structured and what our expected results were.

Conversely, we regularly asked the developers how they were implementing a given feature. This kind of two-way questioning often surfaced inconsistencies in how requirements were interpreted and highlighted defects in our interpretations before code was actually committed.
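To make the parallel test development concrete, here is a minimal sketch of the kind of automated test a tester might draft while the programmer implements the matching feature; it sits idle until the code is checked in, just as Jakub describes. The Invoice class, the JUnit 4 test, and the tax rate are illustrative assumptions, not details from his team.

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;

    // Hypothetical feature class, included only so the example compiles;
    // on the real team, the programmer would be writing this in parallel.
    class Invoice {
        private final List<Double> lineItems = new ArrayList<Double>();

        void addLineItem(String description, double price) {
            lineItems.add(price);
        }

        double totalWithTax(double taxRate) {
            double subtotal = 0.0;
            for (double price : lineItems) {
                subtotal += price;
            }
            return subtotal * (1 + taxRate);
        }
    }

    public class InvoiceTest {
        // Drafted from the story's acceptance criteria while coding is
        // under way; first executed once the Invoice code is checked in.
        @Test
        public void totalIncludesSalesTax() {
            Invoice invoice = new Invoice();
            invoice.addLineItem("widget", 100.00);
            assertEquals(107.00, invoice.totalWithTax(0.07), 0.001);
        }
    }

Writing the test from the story’s acceptance criteria, rather than from the finished code, is what allows the two-way questioning Jakub mentions to happen before the code is committed.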