My team considers worst-case scenarios to help us identify customer tests. For example, we planned a story to rewrite the first step of a multistep account creation wizard with a couple of new options. We asked ourselves questions such as the following: “When the user submits that first page, what data is inserted in the database? Are any other updates triggered? Do we need to regression test the entire account setup process? What about activities the user account might do after setup?” We might need to test the entire life cycle of the account. We don’t have time to test more than necessary, so decisions about what to test are critical. The right tests help us mitigate the risk introduced by the change.
—Lisa
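As a sketch of the kind of customer test such questions might produce (the schema, table names, and helper function here are invented for illustration, not taken from Lisa's actual project), a test can pin down exactly which rows the first wizard step inserts and confirm that no other updates are triggered:

```python
# Hypothetical sketch: verify exactly what the first wizard step writes
# to the database, and that no other tables are touched.
import sqlite3

def submit_wizard_step_one(conn, email, plan):
    # Stand-in for the real first step of the account creation wizard.
    conn.execute(
        "INSERT INTO accounts (email, plan, status) VALUES (?, ?, 'pending')",
        (email, plan),
    )

def test_step_one_inserts_only_a_pending_account():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (email TEXT, plan TEXT, status TEXT)")
    conn.execute("CREATE TABLE billing_events (account_email TEXT, event TEXT)")

    submit_wizard_step_one(conn, "pat@example.com", "basic")

    # The step should insert exactly one pending account...
    rows = conn.execute("SELECT email, plan, status FROM accounts").fetchall()
    assert rows == [("pat@example.com", "basic", "pending")]
    # ...and must not trigger any billing updates yet.
    assert conn.execute("SELECT COUNT(*) FROM billing_events").fetchone()[0] == 0
```

A test like this answers the “what else changed?” question directly, which helps the team decide whether regression testing the whole setup process is really necessary.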
Programmers can identify fragile parts of the code. Does the story involve stitching together legacy code with a new architecture? Does the code being changed interact with another system or depend on third-party software? By discussing potential impacts and risky areas with programmers and other team members, we can plan appropriate testing activities.
There’s another risk. We might get so involved writing detailed test cases up front that the team loses sight of the forest for the trees; that is, we can forget the big picture while we concentrate on details that might prove irrelevant.
Peril: Forgetting the Big Picture
It’s easy to slip into the habit of testing only individual stories or basing your testing on what the programmer tells you about the code. If you find integration problems between stories late in the release, or discover that a lot of requirements were missed after the story is “done,” take steps to mitigate this peril.
Always consider how each individual story impacts other parts of the system. Use realistic test data, use concrete examples as the basis of your tests, and have a lot of whiteboard discussions (or their virtual equivalent) to make sure everyone understands the story. Make sure the programmers don’t start coding before any tests are written, and use exploratory testing to find gaps between stories.
Remember the end goal and the big picture.
As an agile team, we work in short iterations, so it’s important to time-box the test writing we do before coding starts. After each iteration is completed, take the time to evaluate whether more detail up front would have helped. Were there enough tests to keep the team on track? Was a lot of time wasted because the story was misunderstood? Lisa’s team has found it best to write high-level story tests before coding, to write detailed test cases once coding starts, and then to do exploratory testing on the code as it’s delivered to give the team more information and help make needed adjustments.
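To make the distinction concrete (a minimal sketch with invented names, not Lisa's team's actual tests), a high-level story test written before coding pins down only the essential outcome, while detailed cases are added once coding is under way:

```python
# Hypothetical sketch: high-level story test written up front versus a
# detailed case added during coding.
def create_account(email, plan):
    # Stand-in for the feature under development.
    return {"email": email, "plan": plan, "status": "pending"}

def test_story_new_account_starts_pending():
    # High-level test, written before coding: a new account must end up
    # pending activation. This keeps the team aligned on the goal.
    account = create_account("pat@example.com", "basic")
    assert account["status"] == "pending"

def test_detail_chosen_plan_is_recorded():
    # Detailed case, written once coding starts, when the team learns
    # which fields the next wizard step depends on.
    account = create_account("pat@example.com", "basic")
    assert account["plan"] == "basic"
```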
Janet worked on a project that had some very intensive calculations. The time spent creating detailed examples and tests before coding started, to ensure that the calculations were done correctly, was well spent. Understanding the domain, and the impact of each story, is critical to assessing the risk and choosing the correct mitigation strategy.
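Concrete examples like the ones Janet's team worked out can be captured directly as table-driven tests. Here is a minimal sketch; the volume-discount rule and its figures are invented for illustration:

```python
# Hypothetical sketch: customer-supplied calculation examples captured as
# table-driven tests before the calculation code is written.
import pytest

def volume_discount(quantity, unit_price):
    # Stand-in for the calculation under test: 10% off orders of 100+ units.
    total = quantity * unit_price
    return round(total * 0.9, 2) if quantity >= 100 else round(total, 2)

@pytest.mark.parametrize(
    "quantity, unit_price, expected_total",
    [
        (1, 5.00, 5.00),       # single item, no discount
        (99, 5.00, 495.00),    # just below the discount threshold
        (100, 5.00, 450.00),   # threshold reached: 10% off
        (250, 5.00, 1125.00),  # well above the threshold
    ],
)
def test_volume_discount_examples(quantity, unit_price, expected_total):
    assert volume_discount(quantity, unit_price) == expected_total
```

Writing the example table first forces the team to agree on the boundary cases (such as the 99-versus-100 threshold) before any code exists.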
While business-facing tests can help mitigate risks, other types of tests are also critical. For example, many of the most serious issues are uncovered during manual exploratory testing. Performance, security, stability, and usability are also sources of risk. Tests to mitigate these other risks are discussed in the chapters on Quadrants 3 and 4.
Experiment to find ways your team can balance up-front detail with a focus on the big picture. The beauty of short agile iterations is that you have frequent opportunities to evaluate how your process is working so that you can make continual improvements.
Testability and Automation
When programmers on an agile team get ready to do test-driven development, they use the business-facing tests for the story to know what to code. Working from tests means that everyone thinks about the best way to design the code to make testing easier. The business-facing tests in Quadrant 2 are expressed as automated tests. They need to be clearly understood, easy to run, and quick to give feedback; otherwise, they won’t get used.
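As an illustration of what “clearly understood” can mean in practice (the free-shipping rule and names here are invented, not from any particular product), a business-facing test can read like the customer's own words and run in-process so feedback stays fast:

```python
# Hypothetical sketch: business-facing tests named in the customer's
# language, running in-process for quick feedback.
def place_order(items, customer):
    # Stand-in for the production entry point; orders of $50+ ship free.
    total = sum(price for _, price in items)
    return {"total": total, "free_shipping": total >= 50}

def test_orders_of_fifty_dollars_or_more_ship_free():
    order = place_order([("book", 30.00), ("lamp", 25.00)], customer="pat")
    assert order["free_shipping"]

def test_smaller_orders_do_not_ship_free():
    order = place_order([("book", 30.00)], customer="pat")
    assert not order["free_shipping"]
```

A customer reading only the test names and example data can confirm whether the rule is right, which is exactly the feedback the team needs before and during coding.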
It’s possible to write manual test scripts for the programmers to execute before they check in code, so they can make sure they’ve satisfied the customer’s conditions, but it’s not realistic to expect them to go to that much trouble for long. When meaningful business value has to be delivered every two weeks or every 30 days, information has to be direct and automatic. Inexperienced agile teams might accept the need to drive coding with automated tests at the developer test level more easily than at the customer test level. However, without the customer tests, the programmers have a much harder time knowing what unit tests to write.