The #1 thin slice is to insert a new, empty page in the proper place in the establishment flow. While it's not much for our customers to look at, it lets us test the bridge between old and new code, and then verify that the plan establishment navigation still works properly. Slice #2 introduces some business logic: if no mutual fund portfolios are available for the company, skip to the fund selection step, which we're not changing yet; if fund portfolios are available, display them on the new step 3 (this branching rule is sketched in the code after this story). In slice #3, we change the fund selection step, adding logic to display the funds that make up each portfolio. Slice #4 adds the navigational elements between the steps of the establishment process.
We wrote customer tests to define each slice. As the programmers completed each one, we tested it manually and showed it to our customers, fixing any problems immediately. We wrote an automated GUI test for slice #1 and added to it as the remaining slices were finished. The story was difficult because old legacy code had to interact with the new architecture, but the stepwise approach made implementation go smoothly and saved time.
When we draw diagrams such as this one to break stories into slices, we upload photos of them to our team wiki so our remote team member can see them too. As each slice is finished, we check it off to provide instant visual feedback.
—Lisa
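To make slice #2's branching rule concrete, here is a minimal sketch in Java. All the names (PlanEstablishmentFlow, PortfolioRepository, Step) are hypothetical, invented for illustration; Lisa's team's actual code would have looked different.

    import java.util.List;

    // Illustrative sketch of slice #2's branching rule. All names here are
    // hypothetical; they are not taken from the team's actual codebase.
    public class PlanEstablishmentFlow {

        enum Step { PORTFOLIO_DISPLAY, FUND_SELECTION }

        interface PortfolioRepository {
            List<String> findFor(String companyId); // portfolio ids for a company
        }

        private final PortfolioRepository portfolios;

        PlanEstablishmentFlow(PortfolioRepository portfolios) {
            this.portfolios = portfolios;
        }

        // Slice #2: companies with no mutual fund portfolios skip straight
        // to the unchanged fund selection step; all others see the new
        // step 3, which lists the available portfolios.
        Step nextStepAfterStep2(String companyId) {
            return portfolios.findFor(companyId).isEmpty()
                    ? Step.FUND_SELECTION
                    : Step.PORTFOLIO_DISPLAY;
        }
    }

Isolating the rule in one small method like this is also what makes it easy to pin down with a customer test before the GUI around it exists.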
If the task of writing customer tests for a story seems confusing or overwhelming, your team might need to break the story into smaller steps or chunks. Finishing stories a small step at a time helps spread out the testing effort so that it doesn’t get pushed to the end of the iteration. It also gives you a better picture of your progress and helps you know when you’re done—a subject we’ll explore in the next section.
Check the bibliography for Gerard Meszaros’s article “Using Storyotypes to Split Bloated XP Stories.”
How Do We Know We’re Done?
We have our business-facing tests that support the team—those tests that have been written to ensure the conditions of satisfaction have been met. They start with the happy path and show that the story meets the intended need. They cover various user scenarios and ensure that other parts of the system aren’t adversely affected. These tests have been run, and they pass (or at least they’ve identified issues to be fixed).
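As a minimal illustration of starting with the happy path and then adding other user scenarios, here is a JUnit-style sketch. The tiny Account class is invented purely for the example and stands in for whatever domain the story touches.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // A happy-path customer test first, then another user scenario.
    // The Account class below is invented purely for illustration.
    public class AccountCustomerTest {

        static class Account {
            private double balance;
            Account(double opening)   { balance = opening; }
            void deposit(double amt)  { balance += amt; }
            void withdraw(double amt) { balance -= amt; }
            double balance()          { return balance; }
        }

        @Test
        public void depositIncreasesBalance() {      // the happy path
            Account account = new Account(100.00);
            account.deposit(50.00);
            assertEquals(150.00, account.balance(), 0.001);
        }

        @Test
        public void withdrawalDecreasesBalance() {   // another user scenario
            Account account = new Account(100.00);
            account.withdraw(30.00);
            assertEquals(70.00, account.balance(), 0.001);
        }
    }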
Are we done now? We could be, but we're not sure yet. The true test is whether the software's user can perform the action the story was supposed to provide. Activities from Quadrants 3 and 4, such as exploratory testing, usability testing, and performance testing, will help us find out. For now, we just need to do enough exploring to make sure we have captured all of the requirements. The business users or product owners are the right people to determine whether every requirement has been delivered, so they're the right people to do the exploring at this stage.
When the tests all pass and any missed requirements have been identified, we are done for the purpose of supporting the programmers in their quest for code that does the “right thing.” It does not mean we are done testing. We’ll talk much more about that in the chapters that follow.
Another goal of customer tests is to identify high-risk areas and make sure the code is written to guard against them. Risk management is an essential practice in any software development methodology, and testers play a role in identifying and mitigating risks.
Tests Mitigate Risk
Customer tests are written not only to define the expected behavior of the code but also to manage risk. Driving development with tests doesn't mean we'll identify every requirement up front or predict perfectly when we're done, but it does give us a chance to identify risks and mitigate them with executable test cases. Risk analysis isn't a new technique. Agile development inherently mitigates some risks by delivering prioritized business value in small, tested pieces and by involving customers in incremental acceptance. Even so, we should still brainstorm potential adverse events, the probability that each might occur, and the impact on the organization if it does, so that the right mitigation strategy can be employed.
Coding to predefined tests doesn't work well if the tests cover only improbable edge cases. While we don't want to test just the happy path, it's a good place to start. Once the happy path is known, we can define the highest-risk scenarios: cases that not only have a bad outcome but also have a real chance of happening.
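One common way to rank brainstormed risks is by exposure, that is, probability times impact. The sketch below orders risks so that the likeliest bad outcomes get tests first; the sample events, probabilities, and costs are all invented for illustration.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Rank brainstormed risks by exposure = probability x impact.
    // The events, probabilities, and costs are invented for illustration.
    public class RiskRanking {

        record Risk(String event, double probability, double impactCost) {
            double exposure() { return probability * impactCost; }
        }

        public static void main(String[] args) {
            List<Risk> risks = new ArrayList<>(List.of(
                new Risk("legacy navigation breaks around the new step", 0.40, 20_000),
                new Risk("portfolio choice fails to persist",            0.10, 80_000),
                new Risk("step 3 renders badly in one rare browser",     0.05,  5_000)));

            risks.sort(Comparator.comparingDouble(Risk::exposure).reversed());

            // Highest-exposure risks come first; write tests for these first.
            risks.forEach(r -> System.out.printf("%-48s exposure = %,8.0f%n",
                    r.event(), r.exposure()));
        }
    }

Note how the rare browser glitch drops to the bottom even though it is a real bug: low probability and low impact make it a poor candidate for a predefined test.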
In addition to asking the customer team questions such as "What's the worst thing that could happen?", ask the programmers questions like these: "What are the postconditions of this section of code? What should be persisted in the database? What behavior should we look for down the line?" Then specify tests to cover the potentially risky outcomes of an action.
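For example, a test for a risky postcondition might check that the right state was actually persisted. Here is a minimal sketch using an in-memory stand-in for the database; the store and all names are invented for illustration.

    import static org.junit.Assert.assertEquals;

    import java.util.HashMap;
    import java.util.Map;

    import org.junit.Test;

    // Sketch of a postcondition test: after the action, the right state
    // must be persisted. The store and all names are invented.
    public class PortfolioPersistenceTest {

        static class InMemoryPlanStore {            // stand-in for the database
            private final Map<String, String> planByCompany = new HashMap<>();
            void savePortfolioChoice(String companyId, String portfolioId) {
                planByCompany.put(companyId, portfolioId);
            }
            String portfolioFor(String companyId) {
                return planByCompany.get(companyId);
            }
        }

        @Test
        public void chosenPortfolioIsPersistedForLaterSteps() {
            InMemoryPlanStore store = new InMemoryPlanStore();

            store.savePortfolioChoice("acme", "conservative-mix");

            // Postcondition: the choice must still be there down the line,
            // when the fund selection step needs it.
            assertEquals("conservative-mix", store.portfolioFor("acme"));
        }
    }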
Lisa’s Story