Chapter 11, “Critiquing the Product Using Technology-Facing Tests,” shows an example of the results produced by the chosen performance test tool, JMeter.
Choosing Tools
We’re lucky to have an already vast and ever-growing range of tools to choose from: home-grown, open source, and vendor tools, or a combination of these, are all viable alternatives. With so many choices, the trick is knowing where to look and finding time to try tools out to see if they fit your requirements. Because we can’t predict the future, it may be hard to judge the ROI of each potential solution, but an iterative approach to evaluating them helps you get to the right one.
Does your application present unique testing challenges, such as embedded software or integration with outside systems? Do team members have the skills, time, and inclination to write their own test framework or build one on top of an existing open source tool? If so, home-grown test tools may be the best fit.
A happy result (or perhaps a major success factor) of agile development is that many programmers are “test infected.” Today’s development tools and languages make automation frameworks easier to build. Ruby, Groovy, Rails, and many other languages and frameworks lend themselves to automation. Existing open source tools such as Fit and HtmlUnit can be leveraged, with custom frameworks built on top of them.
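To make that idea concrete, here is a minimal sketch of the kind of thin, team-specific layer a home-grown framework might put over an open source library such as HtmlUnit. The class and method names (SiteDriver, assertPageTitle) are invented for illustration, and the sketch assumes a recent HtmlUnit release on the classpath; it is not a prescribed design, just one way a team might wrap library calls in its own vocabulary.

    import com.gargoylesoftware.htmlunit.WebClient;
    import com.gargoylesoftware.htmlunit.html.HtmlPage;

    // Hypothetical home-grown wrapper: test writers call the team's own
    // methods instead of dealing with the HtmlUnit API directly.
    public class SiteDriver {
        private final WebClient client = new WebClient();

        // Fetch a page and fail with a readable message if the title is wrong.
        public void assertPageTitle(String url, String expectedTitle) throws Exception {
            HtmlPage page = client.getPage(url);
            String actual = page.getTitleText();
            if (!expectedTitle.equals(actual)) {
                throw new AssertionError("Expected title '" + expectedTitle
                        + "' but got '" + actual + "' at " + url);
            }
        }
    }

A team could call a wrapper like this from its existing unit test framework and build process, adding only the checks and conveniences it actually needs.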
Home-grown tools have many advantages. They’re definitely programmer-friendly. If your team is writing its own automation frameworks, they’ll be precisely customized to the needs of your development and customer teams, and integrated with your existing build process and other infrastructure—and you can make them as easy to run and interpret as you need.
Home-grown doesn’t mean free, of course. A small team may not have the bandwidth to write and support tools as well as develop production code. A large organization with unique requirements may be able to put together a team of automation specialists who can collaborate with testers, customers, programmers, and others. If your needs are so unique that no existing tool supports them, home-grown may be your only option.
Many teams who wrote their own tools have generously made them available to the open source community. Because these tools were written by test-infected programmers whose needs weren’t met by vendor tools, they are usually lightweight and appropriate for agile development. Many of these tools are developed test-first, and you can download the test suite along with the source code, making customization easier and safer. These tools have a broad appeal, with features useful to both programmers and testers. The price is right, although it’s important to remember that purchase price is only a fraction of any tool’s cost.
Not all open source tools are well documented, and training can be an issue. However, we see seminars and tutorials on using these tools at many conferences and user group meetings. Some open source tools have excellent user manuals and even have online tutorials and scheduled classes available.
If you’re considering an open source solution, look for an active developer and user community. Is there a mailing list with lots of bandwidth? Are new features released often? Is there a way to report bugs, and does anyone fix them? Some of these tools have better support and faster response on bugs than vendor tools. Why? The people writing them are also using them, and they need those features to test their own products.
See Chapter 9, “Toolkit for Business-Facing Tests that Support the Team,” for more about open source test automation tools.
Commercial tools are perceived as a safe bet. It’s hard to criticize someone for selecting a well-known tool that’s been around for years. They’re likely to come with manuals, support, and training. For testers or other users who lack a technical background, the initial ramp-up might be faster. Some are quite robust and feature-rich. Your company may already own one and have a team of specialists who know how to use it.
Although they are changing with the times, vendor tools are historically programmer-unfriendly. They tend to use proprietary scripting languages that programmers don’t want to spend time learning. They also tend to be heavyweight. The test scripts may be brittle, easily broken by minor changes to the application, and expensive to maintain. Most of these tools record scripts for subsequent playback, and record/playback scripts are notoriously costly from a maintenance perspective.
Elisabeth Hendrickson [2008] points out that specialized tools such as these may create a need for test automation specialists. Such silos can work against agile teams. We need tools that facilitate test-first, rather than test-last, development. Test tools shouldn’t stand in the way of change.