We usually minimize test automation at the GUI layer, but there are situations where more GUI automation is appropriate. Some problems only manifest themselves at the GUI level: if the user makes a change at X, what else changes? Lisa tested a bug fix that addressed a back-end problem when retirement plan participants requested a distribution of money from their accounts. The change was surrounded by unit tests, but it was a GUI regression test that caught the problem: the distribution form no longer popped up on request. Nobody anticipated that a back-end change could affect the GUI, so they probably wouldn’t have bothered to test it manually. That’s why you need GUI regression tests, too.
We’ve talked about some disadvantages of record/playback tools, but they’re appropriate in the right situation. You may have good reasons to use one: maybe your legacy code already has a suite of automated tests created in that tool, your team has a lot of expertise with it, or your management has mandated its use. You can use recorded scripts as a starting point, then break them into modules, replace hard-coded data with parameters where appropriate, and assemble tests using the modules as building blocks. Even if you don’t have much programming experience, it’s not hard to identify the blocks of script that should become a module. Login, for example, is an obvious choice.
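To make the modularization idea concrete, here is a minimal sketch in Python. The driver API (`type_into`, `click`) and the control names are hypothetical stand-ins for whatever your record/playback tool exposes, and a stub driver takes the place of the real tool so the example is self-contained:

```python
class FakeDriver:
    """Stub standing in for a real GUI automation driver, so this sketch runs."""
    def __init__(self):
        self.actions = []

    def type_into(self, field, text):
        self.actions.append(f"type {text!r} into {field}")

    def click(self, control):
        self.actions.append(f"click {control}")


def login(driver, username, password):
    # Extracted from a recorded script; credentials that were hard-coded
    # in the recording are now parameters.
    driver.type_into("txtUser", username)
    driver.type_into("txtPassword", password)
    driver.click("btnLogin")


def request_distribution(driver, account, amount):
    # Another building block carved out of the same recording.
    driver.type_into("txtAccount", account)
    driver.type_into("txtAmount", str(amount))
    driver.click("btnSubmit")


# Assemble a test from the building blocks instead of replaying one long script.
driver = FakeDriver()
login(driver, "testuser", "secret")
request_distribution(driver, "401k-123", 500)
print(len(driver.actions))  # 6 actions performed
```

The point is not the specific API but the shape: once `login` exists as a module, every test that needs to log in calls it, and a change to the login screen is fixed in one place.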
Record/playback may also be appropriate for legacy systems that are designed in such a way that makes unit testing difficult and hand-scripting tests from scratch too costly. It’s possible to build a record and playback capability into the application, even a legacy application. With the right design, and the use of some human-readable format for the recorded interaction, it’s even possible to build playback tests before the code is built.
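One way to picture a built-in record/playback capability is a human-readable log of interactions plus a player that replays it against the application. This is a minimal Python sketch under assumed names (`Recorder`, `Player`, a JSON-lines format); because the format is plain text, a playback script can be written by hand before the code it exercises exists:

```python
import json

# Hypothetical line-oriented recording format: one JSON object per
# interaction, e.g. {"action": "set", "field": "amount", "value": "500"}.

class Recorder:
    """Hooked into the application's event handlers to capture interactions."""
    def __init__(self):
        self.log = []

    def record(self, action, **details):
        self.log.append(json.dumps({"action": action, **details}))


class Player:
    """Replays a recorded (or hand-written) log against the application."""
    def __init__(self, handlers):
        self.handlers = handlers  # maps action names to application calls

    def play(self, log_lines):
        for line in log_lines:
            event = json.loads(line)
            self.handlers[event.pop("action")](**event)


# Playback against a toy "application" represented as a dict of fields.
app_state = {}
player = Player({
    "set": lambda field, value: app_state.__setitem__(field, value),
    "submit": lambda form: app_state.__setitem__("submitted", form),
})
script = [
    '{"action": "set", "field": "amount", "value": "500"}',
    '{"action": "submit", "form": "distribution"}',
]
player.play(script)
print(app_state)  # {'amount': '500', 'submitted': 'distribution'}
```

In a real application the handlers would call the same code paths the GUI does, which is what makes the recorded tests meaningful.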
GUI Test Automation: From the Dark Ages to Successful Automation in an Agile Environment
Pierre Veragen, SQA Lead at iLevel by Weyerhaeuser, explains how his team used a tool that provided both record/playback and scripting capability productively in a waterfall environment, and then leveraged it when the company adopted agile development.
Back in our waterfall development days, in 2000, we started doing GUI test automation using a record-playback approach. We quickly accumulated tens of thousands of lines of recorded scripts that didn’t meet our testing needs. When I took over 18 months later, I quickly became convinced that the record-playback approach was for the dinosaurs.
When we had a chance to obtain a new test automation tool at the end of 2003, we carefully evaluated tools with these criteria in mind: record capability to help us understand the scripting language, and the ability to build an object-oriented library to cover most of our needs, including test reporting. At the time, TestPartner from Compuware fulfilled all of our requirements.
We started using TestPartner on a highly complex, CAD-with-engineering application, built in Visual Basic 6, still using a waterfall process. Before we started automating tests, our releases were quickly followed by one or more patches. We focused our automation efforts toward checking the engineering calculations through the GUI, and later, the actual position of the CAD details. These tests included hundreds of thousands of individual verification points, which could never have been done by hand. Within a year, having added a solid set of manual tests of the user interaction, in addition to our automated tests, we were releasing robust software without the usual follow-up patches. We felt confident about our combination of manual and automated tests, which didn’t include a single line of recorded scripts.
In 2004, our group moved to Visual Basic .NET. I spent several months adapting our TestPartner library to activate .NET controls. In 2006, we adopted an Agile methodology. Building on lessons previously learned in the non-Agile world, we achieved astonishing results with test automation. By the end of 2006, team members were able to produce maintainable GUI test scripts and library components after just a few days of training. At the same time, the team embraced unit testing with NUnit and user acceptance tests with FitNesse.
As of this writing, issues are caught at all three levels of our automated testing: Unit, FitNesse, and GUI. The issues found by each of the three testing tiers are of a different nature. Because everything is automated and triggered automatically, issues are caught really fast, in true Agile fashion. Each part of our test automation is bringing value.
Some people feel resources would be better spent on architecture and design, so that GUI test automation isn’t needed. In our development group, each team made its own decision about whether to automate GUI tests.