Beware: Running a thousand tests without any serious problems doesn’t mean you have reliable software. You have to run the right tests. To make a reliability test effective, think about your application and how it is used all day, every day, over a period of time. Specify tests that are aimed at demonstrating that your application will be able to meet your customers’ needs, even during peak times.
Ask the customer team for their reliability criteria in the form of measurable goals. For example, they might consider the system reliable if ten or fewer errors occur for every 10,000 transactions, or if the web application is available 99.999% of the time. Recovery from power outages and other disasters might be part of the reliability objectives and may be stated in the form of service-level agreements (SLAs). Know what they are. Some industries have their own software reliability standards and guidelines.
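Measurable goals like these are easy to check mechanically. Here is a minimal sketch, using the example thresholds above (at most ten errors per 10,000 transactions, and "five nines" availability); the function names and sample numbers are our own invention for illustration.

```python
def meets_error_goal(errors: int, transactions: int,
                     max_errors_per_10k: int = 10) -> bool:
    """True if the observed error rate is within the agreed threshold."""
    return errors / transactions <= max_errors_per_10k / 10_000

def meets_availability_goal(uptime_seconds: float, total_seconds: float,
                            target: float = 0.99999) -> bool:
    """True if measured availability meets the five-nines target."""
    return uptime_seconds / total_seconds >= target

# 8 errors in 20,000 transactions is 4 per 10,000 -- within the goal.
print(meets_error_goal(8, 20_000))                 # True

# 40 seconds of downtime in a 30-day month misses five nines,
# which allows only about 26 seconds of downtime per month.
month = 30 * 24 * 3600
print(meets_availability_goal(month - 40, month))  # False
```

Turning the criteria into executable checks like this also forces the customer team to be precise about what "reliable" means, which is half the battle.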
Driving development with the right programmer and customer tests should enhance the application’s reliability, because this usually leads to better design and fewer defects. Write additional stories and tasks as needed to deliver a system that meets the organization’s reliability standards.
Your product might be reliable after it’s up and running, but it also needs to be installable by all users, in all supported environments. This is another area where following agile principles gives us an advantage.
Installability
One of the cornerstones of a successful agile team is continuous integration. This means that a build is ready for testing anytime during the day. Many teams choose to deploy one or more of the successful builds into test environments on a daily basis.
Automating the deployment creates repeatability and makes deployment a non-event. This excites us because we have both endured weeks of trying to integrate and install a new system. We know that if we build once and deploy that same build to multiple environments, we have ensured consistency.
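The "build once, deploy the same build everywhere" idea can be sketched in a few lines: produce one artifact, copy those identical bytes into each environment, and verify by checksum that no environment received a per-environment rebuild. The paths, artifact name, and environment names below are invented for illustration.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 fingerprint of an artifact's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def deploy(artifact: Path, env_dir: Path) -> str:
    """Copy the one-and-only build into an environment; return its checksum."""
    env_dir.mkdir(parents=True, exist_ok=True)
    target = env_dir / artifact.name
    shutil.copy2(artifact, target)   # same build -- no per-environment rebuild
    return checksum(target)

# Build once (simulated here with a scratch file) ...
root = Path(tempfile.mkdtemp())
artifact = root / "app-1.0.tar.gz"
artifact.write_bytes(b"pretend this is the build output")
expected = checksum(artifact)

# ... then deploy the identical artifact to every environment and verify it.
for env in ("test", "staging", "prod-like"):
    deployed = deploy(artifact, root / env)
    assert deployed == expected, f"{env} received a different build!"
print("all environments received the identical build")
```

The checksum comparison is the point: if any environment's artifact differs from the one that was tested, the script fails loudly instead of letting an inconsistent install slip through.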
Janet’s Story
On one project I worked on, the deployment was automatic and was tested on multiple environments in the development cycle. However, there were issues when deploying to the customer site. We added a step to the end game so that the support group would take the release and do a complete install test as if it were the customer's site. We were able to walk through the deployment notes and eliminate many of the issues the customer would otherwise have seen.
—Janet
As with any other functionality, risks associated with installation need to be evaluated and the amount of testing determined accordingly. Our advice is to do it early and often, and automate the process if possible.
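Automating the install check itself can be as simple as a post-install smoke test, of the kind the support group in Janet's story might run against a customer-like environment. The required files and version string below are hypothetical.

```python
import tempfile
from pathlib import Path

# Hypothetical layout an install is expected to produce.
REQUIRED = ("bin/app", "conf/app.conf", "VERSION")

def verify_install(install_dir: Path, expected_version: str) -> list:
    """Return a list of problems found; an empty list means the install looks good."""
    problems = [f"missing {rel}" for rel in REQUIRED
                if not (install_dir / rel).exists()]
    version_file = install_dir / "VERSION"
    if version_file.exists():
        found = version_file.read_text().strip()
        if found != expected_version:
            problems.append(f"expected version {expected_version}, found {found}")
    return problems

# Simulate an install into a scratch directory, then check it.
install_dir = Path(tempfile.mkdtemp())
for rel in REQUIRED:
    path = install_dir / rel
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("1.4.2" if rel == "VERSION" else "stub")

print(verify_install(install_dir, "1.4.2"))   # clean install: no problems
print(verify_install(install_dir, "1.5.0"))   # version mismatch is reported
```

Run against every build that gets deployed, a check like this makes install problems show up days or weeks before a customer would see them.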
Chapter 20, “Successful Delivery,” has more on installation testing.
“ility” Summary
There are other “ilities” to test, depending on your product’s domain. Safety-critical software, such as that used in medical devices and aircraft control systems, requires extensive safety testing, and the regression tests probably would contain tests related to safety. System redundancy and failover tests would be especially important for such a product. Your team might need to look at industry data around software-related safety issues and use extra code reviews. Configurability, auditability, portability, robustness, and extensibility are just a few of the qualities your team might need to evaluate with technology-facing tests.
Whatever “ility” you need to test, use an incremental approach. Start by eliciting the customer team’s requirements and examples of their objectives for that particular area of quality. Write business-facing tests to make sure the code is designed to meet those goals. In the first iteration, the team might do some research and come up with a test strategy to evaluate the existing quality level of the product. The next step might be to create a suitable test environment, to research tools, or to start with some manual tests.
As you learn how the application measures up to the customers’ requirements, close the loop with new Quadrant 1 and 2 tests that drive the application closer to the goals for that particular property. An incremental approach is also recommended for performance, load, and other tests that are addressed in the next section.
Performance, Load, Stress, and Scalability Testing
Performance, load, stress, and scalability testing all fall into Quadrant 4 because of their technology focus. Often specialized skills are required, although many teams have figured out ways to do their own testing in these areas. Let’s talk about scalability first, because it is often forgotten.
Scalability
Scalability testing verifies that the application remains reliable as more users are added. What that really means is, "Can your system handle the capacity of a growing customer base?" It sounds simple, but really isn't, and it is a problem that an agile team usually can't solve by itself.
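To make the idea concrete, here is a toy sketch of a scalability probe: drive a fake transaction with an increasing number of concurrent "users" and watch whether throughput keeps up and transactions keep succeeding. The handler, user counts, and latency are invented; a real test would exercise the system under test over the network, usually with a dedicated load-testing tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction() -> bool:
    """Stand-in for one real request; the sleep fakes request latency."""
    time.sleep(0.01)
    return True

def throughput(users: int, requests_per_user: int = 5) -> float:
    """Run the workload with `users` concurrent workers; return transactions/sec."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: transaction(),
                                range(users * requests_per_user)))
    elapsed = time.perf_counter() - start
    assert all(results), "some transactions failed under load"
    return len(results) / elapsed

# Step up the simulated user count and observe the trend.
for users in (1, 5, 10):
    print(f"{users:>2} users: {throughput(users):7.1f} tx/s")
```

The interesting output is the trend, not any single number: a scalable system's throughput grows roughly with the user count until some resource saturates, and that saturation point is what the test is hunting for.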