White cards, such as those shown in the first row of Figure 15-8, were regular tasks, blue cards designated technical stories such as refactoring or spikes, and pink cards, shown toward the right-hand side of the board as the darkest color, were bugs that needed to be addressed. It is easy to see that this picture was taken at the beginning of an iteration, because the cards don’t yet have any colored stickers. In the top right-hand corner, you can see the legend. A blue sticker meant a task had been coded, green indicated done (tested), and red meant the task had been deemed not complete or a bug fix had been rejected. As a task or story was completed (i.e., got its green sticker), it was moved to the right of the board.
—Janet
Lisa’s Story
For more than four years, our story board was a couple of sheets of sheet metal, painted in company colors, with color-coded index cards attached to the board by magnets. Figure 15-9 shows a picture of it early in an iteration. Our task cards were also color-coded: white for development tasks, green for testing tasks, yellow and red for bugs, and striped for cards not originally planned in the iteration. The board was so effective at showing our progress that we eventually stopped bothering with a task burndown chart. It let us focus on completing one story at a time. We also used it to post other big visible charts, such as a big red sign announcing that the build had failed. We loved our board.
Figure 15-9 Another sample story board. Used with permission of Mike Thomas. Copyright 2008.
Then, one of our team members moved overseas. We tried using a spreadsheet along with our physical story board, but our remote teammate found the spreadsheet too hard to use. We tried several software packages designed for Scrum teams, but they were so different from our real story board that we couldn’t adjust to using them. We finally found a product (Mingle) that looked and worked enough like our physical board that everyone, including our remote person, could use it. We painted our old story board white, and now we can project the story board on the wall during stand-up meetings.
—Lisa
Distributed teams need some kind of online story board. This might be a spreadsheet, or specialized software that mimics a physical story board, as Mingle does.
Communicating Test Results
Earlier, we talked about planning how to track test results. Now we want to talk about communicating them effectively. Test results are one of the most important ways to measure progress: they show whether new tests are being written and run for each story, and whether they’re all passing. Some teams post big visible charts of the number of tests written, run, and passed. Others have their build process email automated test results to team members and stakeholders. Some continuous integration servers provide GUIs for monitoring builds and their results.
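For example, a post-build step can parse the test runner’s report and mail a summary to the whole team. The following Python sketch shows one way to do this, assuming a JUnit-style XML report; the report path, addresses, and SMTP host are placeholders to adapt to your own environment.

# Hypothetical post-build step: summarize a JUnit-style XML test
# report and email the results to the team. The report path,
# addresses, and SMTP host below are placeholders.
import smtplib
import xml.etree.ElementTree as ET
from email.message import EmailMessage

REPORT = "build/reports/junit-results.xml"   # assumed report location
RECIPIENTS = ["team@example.com"]            # assumed mailing list

def summarize(report_path):
    # JUnit-style reports carry totals as attributes on the root element.
    suite = ET.parse(report_path).getroot()
    total = int(suite.get("tests", "0"))
    failed = int(suite.get("failures", "0")) + int(suite.get("errors", "0"))
    return total, failed

def mail_results(total, failed):
    status = "PASSED" if failed == 0 else "FAILED"
    msg = EmailMessage()
    msg["Subject"] = f"Build tests {status}: {total - failed}/{total} passing"
    msg["From"] = "build@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content(f"{total} tests run, {failed} failing.")
    with smtplib.SMTP("smtp.example.com") as server:  # assumed SMTP host
        server.send_message(msg)

if __name__ == "__main__":
    mail_results(*summarize(REPORT))

A plain-text summary like this keeps everyone, including remote teammates, looking at the same numbers after every build.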
We’ve heard of teams that have a projector hooked up to the machine that runs FitNesse tests on a continuous build and displays the test results at all times. Test results are a concrete depiction of the team’s progress. If the number of tests doesn’t go up every day or every iteration, that might indicate a problem. Either the team isn’t writing tests (assuming they’re developing test-first), or they aren’t getting much code completed. Of course, it’s possible they are ripping out old code and the tests that went with it. It’s important to analyze why trends are going the wrong way. The next section gives you some ideas about the types of metrics you may want to gather and display.
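Because the trend matters more than any single number, it can also help to log each build’s test count and flag drops automatically. This Python sketch is one possible approach, not a feature of any particular tool; the report and log file paths are hypothetical.

# Hypothetical nightly job: log the test count so the trend can be
# charted, and flag any drop for discussion. Paths are placeholders.
import csv
import datetime
import xml.etree.ElementTree as ET

REPORT = "build/reports/junit-results.xml"
TREND_LOG = "metrics/test-count-trend.csv"

count = int(ET.parse(REPORT).getroot().get("tests", "0"))

# Read the most recent logged count, if there is one.
try:
    with open(TREND_LOG) as log:
        rows = list(csv.reader(log))
    previous = int(rows[-1][1]) if rows else None
except FileNotFoundError:
    previous = None

# Append today's count for the big visible trend chart.
with open(TREND_LOG, "a", newline="") as log:
    csv.writer(log).writerow([datetime.date.today().isoformat(), count])

if previous is not None and count < previous:
    # A drop isn't always bad -- the team may have removed old code and
    # its tests -- but it's worth asking why.
    print(f"Warning: test count fell from {previous} to {count}.")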
However your team decides to communicate its progress, make sure you think about it up front and that everyone gets value from it.
Release Metrics
We include this section here because it is important to understand which metrics you want to gather from the very beginning of a release. These metrics should give you continual feedback about how development is proceeding, so that you can respond to unexpected events and change your process as needed. Remember, you need to understand what problem you are trying to solve with your metrics so that you can track the right ones. The metrics we talk about here are just a few examples of what you might choose to track.
Many agile teams track the number of tests at each level: unit, functional, story, GUI, load, and so on. The trend is more important than the number itself. We get a warm, fuzzy feeling seeing the number of tests go up, but a number without context is just a number. For example, if a team says it has 1000 tests, what does that mean? Do 1000 tests give 10% or 90% coverage? What happens when code that has tests is removed?
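One way to give the number context is to report it alongside coverage. The sketch below assumes a JUnit-style test report and a Cobertura-style coverage.xml, such as the one coverage.py can generate; both file names are placeholders.

# Hypothetical metrics snippet: pair the test count with line coverage
# so that "1000 tests" comes with context. File names are placeholders.
import xml.etree.ElementTree as ET

tests = int(ET.parse("build/reports/junit-results.xml").getroot().get("tests", "0"))
coverage = float(ET.parse("coverage.xml").getroot().get("line-rate", "0"))

print(f"{tests} tests covering {coverage:.0%} of lines")

Posting both numbers together answers the coverage question before anyone has to ask it.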