We know from experience that new teams often lack specialists or expertise that might be key to their success. Lisa’s team has run into obstacles so large that the only thing to do was to step back and ask, “What role are we missing on our team that is holding us back? What do we need? Another developer, another tester, a database designer?” We all know that testing is a vast field. Maybe you need someone experienced in testing on an agile team, or maybe you need a performance testing specialist. It’s critical that you take the time to analyze what roles your team needs for the product to succeed, and if you have to fill them from outside the team, do it.
Everyone already on the product team must also understand their role, or figure out what their role is now that they’re part of a new agile team. Doing this takes time and training.
Lack of Training
We hosted a session in the “Conference within a Conference” at Agile 2007 that asked people what testing-related problems they were having on their agile teams. One attendee told us that their organization had split up its centralized test team, as the agile literature advocates, but moved the testers into development units without any training. Within three months, all of the testers had quit because they didn’t understand their new roles. Problems like these can be prevented with the right training and coaching.
When we started working with our first agile teams, there weren’t many resources available to help us learn what agile testers should do or how we should work with our teams. Today, you can find many practitioners who can help train testers to adapt to an agile environment and help test teams make the agile transition. Local user groups, conferences, seminars, online instruction, and mailing lists all provide valuable resources to testers and managers wanting to learn. Don’t be afraid to seek help when you need it. Good coaching gives a solid return on your investment.
Not Understanding Agile Concepts
Not all agile teams are the same. There are lots of different approaches to agile development, such as XP, Scrum, Crystal, FDD, DSDM, OpenUP, and various mixes of those. Some self-titled “agile” teams are not, in our opinion, really practicing agile. Plenty of teams simply adopt practices that work for them regardless of the original source, or they invent their own. That’s fine, but if they don’t follow any of the core agile values and principles, we question giving them an agile label. Releasing every month and dispensing with documentation does not equate to agile development!
If different team members have opposing notions of what constitutes “agile,” which practices they should use, or how those practices should be carried out, there’s going to be trouble. For example, if you’re a tester pushing the team to implement continuous integration but the programmers simply refuse to try it, you’re in a bad spot. If you’re a programmer who can’t get the rest of the team engaged in practices such as driving development with business-facing tests, you’re also in for conflict.
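To make “driving development with business-facing tests” a bit more concrete, here is a minimal sketch of what such a test might look like in Java with JUnit 4. The domain (a discount rule) and every class, method, and number in it are invented purely for illustration; the point is that the test states an expectation in the customer’s language before the production code exists.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A sketch of a business-facing test written before the code it drives.
// DiscountCalculator and its rule are hypothetical, for illustration only.
public class PreferredCustomerDiscountTest {

    // The production code under test. In test-driven style, this class
    // would be written only after the tests below had failed.
    static class DiscountCalculator {
        double priceFor(boolean preferredCustomer, double orderTotal) {
            // Business rule: preferred customers get 10% off orders over $100.
            if (preferredCustomer && orderTotal > 100.00) {
                return orderTotal * 0.90;
            }
            return orderTotal;
        }
    }

    @Test
    public void preferredCustomersGetTenPercentOffLargeOrders() {
        DiscountCalculator calculator = new DiscountCalculator();
        // Stated in the customer's terms: a preferred customer ordering
        // $120 of goods pays $108.
        assertEquals(108.00, calculator.priceFor(true, 120.00), 0.001);
    }

    @Test
    public void regularCustomersPayFullPrice() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(120.00, calculator.priceFor(false, 120.00), 0.001);
    }
}

In practice, a team might express the same expectation in a tool such as FitNesse so that nonprogrammers can read and write it; the JUnit form above simply shows the idea in code. Getting the whole team to work this way, however, takes agreement.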
The team must reach consensus on how to proceed in order to make a successful transition to agile. Many agile development practices are synergistic; used in isolation, they might not provide the benefits teams are looking for. Perhaps the team can agree to experiment with certain practices for a given number of iterations and then evaluate the results. It could also seek external input to help everyone understand the practices and how they fit together. Diverse viewpoints are good for a team, but everyone needs to be headed in the same direction.
Several people we’ve talked to described the “mini-waterfall” phenomenon that often occurs when a traditional software development organization adopts an agile process. The organization replaces a six-month or year-long development cycle with a two- or four-week one and simply tries to squeeze all of the traditional software development life cycle phases into that short period. Naturally, it keeps having the same problems it had before. Figure 3-1 shows an “ideal” version of the mini-waterfall, where a code-and-fix phase is followed by testing: the testing comes after coding is completed but before the next iteration starts. What really happens, though, is that testing gets squeezed into the end of the iteration and usually drags over into the next one. Because testing hasn’t found many defects yet, the programmers don’t have much to fix, so they start working on the next iteration’s stories. Before long, some teams are always an iteration “behind” with their testing, and release dates get postponed just as they always did.
Figure 3-1 A mini-waterfall process