We treat all test artifacts the same way we treat source code, both organizationally and in revision control. We use Subversion, and anyone who wants to run or edit the tests simply checks them out.
The latest Fit tests are available on a Confluence wiki. We did this to support collaboration (the team is distributed) and to leverage Confluence’s strong capabilities. Having the tests visible on the wiki was also helpful to others, such as managers and other teams, who did not want to check them out from the repository.
Prior to this, the QA team maintained test cases on a drive that was not accessible to anyone outside of QA. This meant that developers could not easily see what was being tested. Making the tests visible, transparent, and supported by a version control system (Subversion) really helped to break down barriers between developers and testers on the team.
Make sure your tests are managed with solid version control, but augment that with ways for everyone to use the tests to drive the project forward and ensure the right value is delivered.
Organizing Test Results
Everyone involved with delivering software needs easy access to tests and test results. Another aspect of managing tests is keeping track of which tests are from prior iterations and must keep passing, versus which tests are driving development in the current iteration and may not be passing yet. A continuous integration and build process runs tests to give quick feedback on progress and to catch regression failures. Figure 14-4 shows an example of a test result report that’s understandable at a glance: one test failed, and the cause of the failure is clearly stated.
Figure 14-4 Test results from a home-grown test management tool
If you’re driving development with tests, and some of those tests aren’t passing yet, those failures shouldn’t break the build. Some teams, such as Lisa’s, simply keep new tests out of the integration and build process until they pass for the first time; after that, they always need to pass. Other teams use rules in the build process itself to ignore failures from tests written to cover the code currently being developed.
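One way to implement that second approach is to tag tests for in-progress stories so the build runs them but doesn’t let them break it. Here’s a minimal sketch using pytest markers as an analogous mechanism; the marker name, file, and test are illustrative, not from any team described in this chapter:

# test_discounts.py -- an illustrative file, not an actual team's test.
# Register the marker once in pytest.ini so pytest doesn't warn about it:
#   [pytest]
#   markers = wip: test drives a story still in development
import pytest

def calculate_discount(customer_type):
    # Placeholder for production code that doesn't exist yet; the test
    # below was written first and fails until the story is implemented.
    raise NotImplementedError

@pytest.mark.wip
def test_new_customers_get_ten_percent_discount():
    assert calculate_discount("new customer") == 10

The gating build step would run pytest -m "not wip", so only tests for finished stories can break the build, while a separate, non-gating step runs pytest -m wip to keep the in-progress results visible. Removing the marker when the story is done promotes the test into the regression suite.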
As with any test automation tool, you can solve your test management problems with home-grown, open source, or commercial systems. The same criteria we described in the section on evaluating test tools can be applied to selecting a test management approach.
Test management is yet another area where agile values and principles, together with the whole-team approach, apply. Start simple. Experiment in small steps until you find the right combination of source code control, repositories, and build management that keeps tests and production code in sync. Evaluate your test management approach often, and make sure it accommodates all of the different users of tests. Identify what’s working and what’s missing, and plan tasks or even stories to try another tool or process to fill any gaps. Remember to keep test management lightweight and maintainable so that everyone will use it.
Managing Tests for Feedback
Megan Sumrell, an agile trainer and coach, describes how her team coordinates its build process and tests for optimum feedback.
We create a FitNesse test suite for each sprint. In that suite, we create a subwiki for each user story that holds its tests. As needed, we create a setup and teardown per test or suite. If for some reason we don’t complete a user story in the sprint, we move its tests to the suite for the sprint in which we do complete the story.
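In FitNesse terms, the resulting hierarchy might look something like this sketch (the sprint and story page names are hypothetical; SuiteSetUp and SuiteTearDown pages run once per suite, while SetUp and TearDown pages run around each individual test page):

SprintTwelve (suite)
    SuiteSetUp
    StoryImportPayrollFile (subwiki, suite)
        SetUp
        TestValidPayrollFile
        TestRejectedPayrollFile
        TearDown
    StoryEmployeeSearch (subwiki, suite)
        SetUp
        TestSearchByLastName
        TearDown
    SuiteTearDown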
We scripted the following rule into our build: If any of the suites from the previous sprint fail, then the build breaks. However, if tests in the current sprint are failing, then do not fail the build.
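A minimal sketch of such a rule, in Python for illustration. The suite names are hypothetical, and the assumption that FitNesse’s standalone jar exits non-zero when a suite has failures should be verified against your FitNesse version:

import subprocess
import sys

# Hypothetical suite names following a per-sprint naming convention.
PREVIOUS_SPRINT_SUITE = "SprintElevenSuite"
CURRENT_SPRINT_SUITE = "SprintTwelveSuite"

def suite_passed(suite_name):
    # Run one FitNesse suite from the command line and report pass/fail.
    # Assumes the standalone jar's exit code reflects test failures.
    result = subprocess.run([
        "java", "-jar", "fitnesse-standalone.jar",
        "-c", f"{suite_name}?suite&format=text",
    ])
    return result.returncode == 0

if not suite_passed(PREVIOUS_SPRINT_SUITE):
    # A regression against a finished story breaks the build.
    sys.exit("Build FAILED: previous sprint's tests are failing.")

if not suite_passed(CURRENT_SPRINT_SUITE):
    # Tests driving stories still in progress are reported, not fatal.
    print("WARNING: current sprint's tests are not all passing yet.")

print("Build OK.")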
Each test suite has a lengthy setup process, so when our FitNesse tests started taking longer than 10 minutes to run, our continuous integration build became too slow. We used symbolic links to create a suite of tests that serves as our smoke tests, running as part of our continuous integration build. We run the complete set of FitNesse tests on a separate machine, which checks the build server every five minutes. Whenever a new build exists, it pulls the build over, runs the whole set of FitNesse tests, and then goes back to checking every five minutes for the next build.
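A watcher like the one Megan describes could be sketched as a simple polling loop; the marker file, pull command, and suite name below are placeholders, not her team’s actual setup:

import subprocess
import time
from pathlib import Path

BUILD_MARKER = Path("/mnt/buildserver/latest-build.txt")  # placeholder path
POLL_SECONDS = 300  # check the build server every five minutes

last_build_seen = None

while True:
    current_build = BUILD_MARKER.read_text().strip()
    if current_build != last_build_seen:
        # A new build exists: pull it over, then run every FitNesse test.
        subprocess.run(["./pull_build.sh", current_build])  # placeholder script
        subprocess.run([
            "java", "-jar", "fitnesse-standalone.jar",
            "-c", "FullRegressionSuite?suite&format=text",  # placeholder suite
        ])
        last_build_seen = current_build
    time.sleep(POLL_SECONDS)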
Megan’s team took advantage of features built into their tools, such as FitNesse’s symbolic links, to organize test suites for different purposes: one for smoke tests, others for complete regression testing. The team members get immediate feedback from the smoke tests, and they’ll know within an hour whether there’s a bug that the smoke tests missed.
Go Get Started