• “Minutes after” coding each production module, create a test program that uses the screen format to test via the UI. The UI for the test is created before the production program, because the test’s UI is the referenced interface for the production module. The programmer has a strong incentive to run the test, because most of the coding for it is already done.
• Use standard test data sets (unchanging, canonical test data) to drive the tests; a sketch of such a data-driven test appears below.
• Because the test programs are almost auto-generated, this approach lends itself to automation with a record/playback tool that captures data inputs and outputs, with the tests run in a continuous build using RPGUnit.
For more about RPGUnit, see www.RPGUnit.org.
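The idea of driving tests from canonical data translates beyond RPG. Here is a minimal sketch in Java with JUnit 5 (since the RPGUnit specifics aren’t shown here); the `InvoiceCalculator` module, its `totalWithTax` method, and the data values are hypothetical stand-ins, not taken from the approach above.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;

// Hypothetical production module standing in for an RPG program.
class InvoiceCalculator {
    static double totalWithTax(double subtotal, double taxRate) {
        return subtotal + (subtotal * taxRate);
    }
}

class InvoiceCalculatorTest {

    // Canonical test data: unchanging inputs paired with expected outputs.
    // In the approach above, this data would live in a standard test data
    // set shared by every test run.
    record Case(double subtotal, double taxRate, double expectedTotal) {}

    static final List<Case> CANONICAL_CASES = List.of(
            new Case(100.00, 0.05, 105.00),
            new Case(250.00, 0.00, 250.00),
            new Case(0.00, 0.05, 0.00));

    @Test
    void totalsMatchCanonicalData() {
        for (Case c : CANONICAL_CASES) {
            assertEquals(c.expectedTotal(),
                    InvoiceCalculator.totalWithTax(c.subtotal(), c.taxRate()),
                    0.001);
        }
    }
}
```

Because the expected results are part of the canonical data rather than scattered through the test code, adding a new case is cheap, which keeps the “impetus to test” high.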
Your team can find an approach to designing for testability that works for you. The secret is the whole-team commitment to testing and quality. When a team is constantly working to write tests and make them pass, it finds a way to get it done. Teams should take time to consider how they can create an architecture that will make automated tests easy to create, inexpensive to maintain, and long-lived. Don’t be afraid to revisit the architecture if automated tests don’t return enough value for the investment in them.
Timely Feedback
The biggest value of unit tests is in the speed of their feedback. In our opinion, a continuous integration and build process that runs the unit tests should finish within ten minutes. If each programmer checks code in several times a day, a longer build and test process will cause changes to start stacking up. As a tester, you can find it frustrating to wait a long time for new functionality or a bug fix. If there’s a compile error or unit test failure, the delay gets even worse, especially if it’s almost time to go home!
A build and test process that runs tests above the unit level, such as functional API tests or GUI tests, is going to take longer. Have at least one build process that runs quickly, and a second that runs the slower tests. There should be at least one daily “build” that runs all of the slower functional tests. However, even that can be unwieldy. When a test fails and the problem is fixed, how long will it take to know for sure that the build passes again?
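One common way to keep the fast build fast is to mark the slower tests so the build tool can filter them into a separate process. The following sketch uses JUnit 5 tags; the class names and tag values are illustrative only.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Fast, in-memory unit test: runs in the quick build on every check-in.
@Tag("fast")
class PriceRuleTest {
    @Test
    void appliesDiscount() { /* assertions on pure business logic */ }
}

// Slow end-to-end test: runs in the second, slower build.
@Tag("slow")
class CheckoutFlowGuiTest {
    @Test
    void completesPurchaseThroughTheUi() { /* drives the real UI */ }
}
```

The quick build would then include only the “fast” tests, while the second build runs everything; build tools such as Gradle and Maven Surefire can filter JUnit 5 tests by tag, so the split costs little to maintain.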
If your build and test process takes too long, ask your team to analyze the cause of the slowdown and take steps to speed up the build. Here are a few examples.
Lisa’s Story
Early in my current team’s agile evolution, we had few unit tests, so we included a few GUI smoke tests in our continual build, which kicked off on every check-in to the source code control system. When we had enough unit tests to feel good about knowing when code was broken, we moved the GUI tests and the FitNesse functional tests into a separate build and test process that ran at night, on the same machine as our continual build.
Our continual build started out taking less than 10 minutes, but soon was taking more than 15 minutes to complete. We wrote task cards to diagnose and fix the problem. The unit tests the programmers had written early on weren’t well designed, because nobody was sure of the best way to write unit tests. We budgeted time to refactor the unit tests, use mock data access objects instead of the real database (see the sketch after this story), and redesign the tests for speed. This got the build down to around eight minutes. Every time the build time has started to creep up, we’ve addressed the problem by refactoring, removing unnecessary tests, upgrading the hardware, and switching to software that helped the build run faster.
As our functional tests covered more code, the nightly build broke more often. Because the nightly build ran on the same machine as the continual build, the only way to verify that the nightly build was “green” again was to stop the continual build, which took away our fast feedback. This wasted everyone’s time. We bought and set up another build machine for the longer build, which now also runs continuously. This was much less expensive than spending so much time keeping two builds running on the same machine, and now we get quick feedback from our functional tests as well.
—Lisa
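The refactoring Lisa mentions, replacing calls to the real database with mock data access objects, might look something like this sketch in Java with Mockito; `OrderDao`, `OrderService`, and the rest are hypothetical names, not from Lisa’s codebase.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

// Hypothetical data access interface normally backed by the real database.
interface OrderDao {
    List<Double> amountsForCustomer(String customerId);
}

// Hypothetical production class under test.
class OrderService {
    private final OrderDao dao;
    OrderService(OrderDao dao) { this.dao = dao; }

    double totalForCustomer(String customerId) {
        return dao.amountsForCustomer(customerId).stream()
                .mapToDouble(Double::doubleValue).sum();
    }
}

class OrderServiceTest {
    @Test
    void totalsOrdersWithoutTouchingTheDatabase() {
        // The mock returns canned data, so the test never opens a
        // database connection and runs in milliseconds.
        OrderDao dao = mock(OrderDao.class);
        when(dao.amountsForCustomer("42")).thenReturn(List.of(10.0, 15.5));

        assertEquals(25.5, new OrderService(dao).totalForCustomer("42"), 0.001);
    }
}
```

Tests like this stay in the fast build because they exercise only in-memory objects; the tests that hit the real database move to the slower build.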