XYZ created an automation team to build the framework and tests. Creating the framework itself was a time-consuming, technically challenging task; some of the framework classes were complicated enough to warrant unit tests of their own. Once enough of the framework had been constructed, we began work on actual application tests, using the Ruby RSpec library. RSpec is itself a DSL for test specifications, and one of its strengths is the use of simple declarative statements to describe behavior and expectations. One might, for example, write a test using a statement such as it "displays an error message when the user enters an invalid password", filling in the body of the test with calls to the Selenium-based test framework we had created.
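To make that concrete, here is a minimal sketch of what one of these specs might have looked like, assuming the rspec and selenium-webdriver gems; the URL, element names, and message are hypothetical illustrations, not XYZ’s actual framework.

    require 'rspec'
    require 'selenium-webdriver'

    RSpec.describe "Login page" do
      it "displays an error message when the user enters an invalid password" do
        driver = Selenium::WebDriver.for :firefox
        begin
          driver.get("https://app.example.com/login")   # hypothetical URL
          driver.find_element(name: "username").send_keys("testuser")
          driver.find_element(name: "password").send_keys("wrong-password")
          driver.find_element(name: "login").click

          # The declarative description above is backed by a concrete expectation.
          error = driver.find_element(class: "error-message")
          expect(error.text).to include("Invalid username or password")
        ensure
          driver.quit
        end
      end
    end

In XYZ’s suite, raw Selenium calls like these lived inside framework classes rather than in the tests themselves, which is what kept the specs declarative.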
A year later, we had automated nearly two thousand test cases. Although the majority of the application was covered by automation, other portions still required manual testing; we had been forced to make choices and prioritize our efforts. The test suite took longer to run each week than the week before; it now took nearly six hours to complete, and we had begun to think about running tests in parallel. We had not yet managed to expand our testing across all of the browsers supported by the application. The enthusiasm that automation generated had waned somewhat, and we found it necessary to carefully manage expectations, both with upper management and with other engineers. Despite these issues, Selenium was a clear win: had we not invested heavily in test automation, testing at XYZ would have required hiring an army of test engineers (which would have been prohibitively expensive, even if we had been able to find enough qualified applicants).
Not everything can be automated, whether for budgetary or technical reasons. In addition, exploratory testing is invaluable and should not be neglected. These limitations, however, are shared by every other test automation tool currently available, and most of the tools that can rival Selenium’s capabilities are commercial products that cannot match its price: free.
Good development practices are key to any automation effort. Use an object-oriented approach: as you build your library of test objects, adding new tests becomes easier. A domain-specific language helps make business-facing tests understandable to customers while lowering the cost of writing and maintaining automated test scripts.
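A sketch of how such a library of test objects might be layered (illustrative only; the LoginPage class and its method names are assumptions, not XYZ’s code) is to hide every Selenium call inside a page object and let the tests speak in domain terms:

    require 'selenium-webdriver'

    # Page object: encapsulates the locators and Selenium calls for one page,
    # so tests never perform raw element lookups directly.
    class LoginPage
      def initialize(driver)
        @driver = driver
      end

      def open
        @driver.get("https://app.example.com/login")  # hypothetical URL
        self
      end

      def log_in_as(username, password)
        @driver.find_element(name: "username").send_keys(username)
        @driver.find_element(name: "password").send_keys(password)
        @driver.find_element(name: "login").click
        self
      end

      def error_message
        @driver.find_element(class: "error-message").text
      end
    end

    # A test now reads in the language of the business:
    #   LoginPage.new(driver).open.log_in_as("testuser", "secret")

When the login page’s markup changes, only LoginPage changes; every test that logs in is untouched.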
Good object-oriented design isn’t the only key to building a suite of maintainable automated tests that pay off. You also need to run the tests often enough to get the feedback your team needs. Whatever tools we choose must be integrated with our build process, so that easy-to-interpret results come to us automatically.
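As one illustration of that integration, assuming an RSpec suite and a Rake-based build (a common Ruby arrangement, not a prescription), a Rake task gives the CI server and every developer the same entry point:

    # Rakefile
    require 'rspec/core/rake_task'

    # Defines a `rake spec` task that runs the whole suite and fails
    # the build if any example fails.
    RSpec::Core::RakeTask.new(:spec) do |t|
      t.pattern    = "spec/**/*_spec.rb"
      t.rspec_opts = "--format documentation"  # readable results in the build log
    end

    task default: :spec

The build server simply runs rake, so test failures break the build automatically.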
The tools we choose have to work on our platforms, and must share and play well with our other tools. We have to continually tweak them to help with our current issues. Is the build breaking every day? Maybe we need to hook our results up to an actual traffic light to build team awareness of its status. Did a business-facing test fail? It should be plain exactly what failed, and where. We don’t have extra time to spend isolating problems.
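To make a failure that plain, one option (a sketch using RSpec 3’s formatter API; the class name is ours) is a custom formatter that prints the full description and location of every failing example:

    require 'rspec/core'
    require 'rspec/core/formatters'

    # Prints one block per failure: what failed, where, and why.
    class FailureLineFormatter
      RSpec::Core::Formatters.register self, :example_failed

      def initialize(output)
        @output = output
      end

      def example_failed(notification)
        example = notification.example
        @output.puts "FAILED: #{example.full_description}"
        @output.puts "    at #{example.location}"
        @output.puts "    #{notification.exception.message.lines.first.to_s.strip}"
      end
    end

    # Usage: rspec --require ./failure_line_formatter.rb --format FailureLineFormatter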
These concerns are an essential part of the picture, but still only part of the picture. We need tools that help us devise test environments that mimic production. We need ways to keep these test environments independent, unaffected by changes programmers might be making.
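A small but effective piece of that independence, sketched here with a hypothetical APP_BASE_URL variable, is resolving the environment under test from configuration in exactly one place:

    # spec/spec_helper.rb (illustrative)
    # Point the suite at a dedicated, stable test environment by default;
    # override with APP_BASE_URL when a different environment is needed.
    BASE_URL = ENV.fetch("APP_BASE_URL", "https://test-env.example.com")

    # Page objects and tests build every URL from BASE_URL, so moving the
    # suite to a fresh environment never means editing the tests:
    #   driver.get("#{BASE_URL}/login")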
Building test infrastructure can be a big investment, but it’s one our agile team needs to make to get a jump on test automation. Hardware, software, and tools need to be identified and put in place. Depending on your company’s resources, this might be a long-term project. Brainstorm ways to cope in the short term while you plan how to put together the infrastructure you really need to minimize risk, maximize velocity, and deliver the best possible product.
Managing Automated Tests
Let’s say we need a way to find the test that exercises a particular scenario, to understand what each test does, and to know what part of the application it verifies. Perhaps we need to satisfy an audit requirement for traceability from each requirement to its code and tests. Automated tests need to be maintained and controlled in the same way as production source code: when you tag your production code for release, the tests that verified that functionality need to be part of the tag.
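RSpec’s arbitrary example metadata offers one way to get that traceability (a sketch; the requirement-ID scheme is invented for illustration): tag each example with the requirement it verifies, and filter on the tag.

    RSpec.describe "Funds transfer" do
      # Hypothetical requirement ID; an audit can trace REQ-1042 to this test.
      it "rejects transfers that exceed the account balance", requirement: "REQ-1042" do
        # ... body built from calls into the Selenium-based framework ...
      end
    end

    # Run only the tests that verify a given requirement:
    #   rspec --tag requirement:REQ-1042

Because the specs live in the same repository as the production code, tagging a release captures both together.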