We encourage development teams to create standards and guidelines for application code, test frameworks, and the tests themselves. Teams that set their own standards, rather than having them handed down by some other independent team, are more likely to follow them, because the standards make sense to them.
The kinds of standards we mean include naming conventions for methods and tests. All guidelines should be simple to follow and should make code easier to maintain. Examples are: “Success is always zero and failure must be a negative value,” “Each class or module should have a single responsibility,” or “All functions must be single entry, single exit.”
Standards for developing the GUI also make the application more testable and maintainable, because testers know what to expect and don’t need to wonder whether a behavior is right or wrong. GUI standards also add to testability if you are automating tests through the GUI. Simple standards such as, “Use names for all GUI objects rather than defaulting to the computer-assigned identifier” or “You cannot have two fields with the same name on a page” help the team keep both the code and the automated tests that cover it maintainable.
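To see why a naming standard helps automated GUI tests, consider this sketch. The page representation and lookup helper are hypothetical, standing in for whatever your GUI toolkit and test tool provide:

```python
# Sketch: a test script locates GUI objects by stable, agreed-upon names.
# Auto-generated identifiers change when the layout changes; explicit,
# unique names survive those changes, so the test scripts stay maintainable.

page = {
    # Named per the standard: every object gets an explicit, unique name.
    "email_field": {"type": "input", "label": "Email"},
    "submit_button": {"type": "button", "label": "Submit"},
}

def find_object(page, name):
    """Look up a GUI object by its standard name; fail loudly otherwise."""
    if name not in page:
        raise LookupError(f"No GUI object named {name!r}; check naming standard")
    return page[name]
```

A test that says `find_object(page, "submit_button")` documents its own intent, whereas one that clicks `ctl00_Content_btn47` breaks the first time the identifier is regenerated.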
Maintainable code supports shared code ownership. It is much easier for a programmer to move from one area to another if all code is written in the same style and easily understood by everyone on the team. Complexity adds risk and also makes code harder to understand. The XP value of simplicity should be applied to code. Simple coding standards can also include guidelines such as, “Avoid duplication—Don’t copy-paste methods.” These same concepts apply to test frameworks and the tests themselves.
Maintainability is an important factor for automated tests as well. Test tools have lagged behind programming tools in features that make them easy to maintain, such as IDE plug-ins to make writing and maintaining test scripts simpler and more efficient. That’s changing fast, so look for tools that provide easy refactoring and search-and-replace, and for other utilities that make it easy to modify the scripts.
Database maintainability is also important. The database design needs to be flexible and usable. Every iteration might bring tasks to add or remove tables, columns, constraints, or triggers, or to do some kind of data conversion. These tasks become a bottleneck if the database design is poor or the database is cluttered with invalid data.
Lisa’s Story
A serious regression bug went undetected and caused production problems. We had a test that should have caught the bug. However, a constraint was missing from the schema used by the regression suite. Our test schemas had grown haphazardly over the years. Some had columns that no longer existed in the production schema. Some were missing various constraints, triggers, and indices. Our DBA had to manually make changes to each schema as needed for each story instead of running the same script in each schema to update it. We budgeted time over several sprints to recreate all of the test schemas so that they were identical and also matched production.
—Lisa
Plan time to evaluate the database’s impact on team velocity, and refactor it just as you do production and test code. Maintainability of all aspects of the application, test, and execution environments is more a matter of assessment and refactoring than direct testing. If your velocity is going down, is it because parts of the code are hard to work on, or is it that the database is difficult to modify?
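One way to keep every schema identical, as Lisa’s team eventually did, is to drive all changes through a single migration script that is run against every database, test and production alike. Here is a minimal sketch, using SQLite purely for illustration; the table names and migration steps are hypothetical:

```python
# Sketch of an idempotent migration runner: the same ordered list of
# changes is applied to every schema, so test schemas cannot drift away
# from production the way hand-edited ones do.
import sqlite3

MIGRATIONS = [
    # Each named step is applied exactly once, in order, to every database.
    ("001_create_orders", "CREATE TABLE orders (id INTEGER PRIMARY KEY)"),
    ("002_add_status",
     "ALTER TABLE orders ADD COLUMN status TEXT NOT NULL DEFAULT 'new'"),
]

def migrate(conn):
    """Apply any migrations this database has not yet seen."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    for name, sql in MIGRATIONS:
        if name not in applied:  # idempotent: rerunning is a no-op
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version (name) VALUES (?)", (name,))
    conn.commit()
```

Because the runner records which steps have been applied, the DBA can point it at any schema, old or new, and end up with the same structure everywhere.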
Interoperability
Interoperability refers to the capability of diverse systems and organizations to work together and share information. Interoperability testing looks at end-to-end functionality between two or more communicating systems. These tests are done in the context of the user—whether a human or a software application—and look at functional behavior.
In agile development, interoperability testing can start early in the development cycle, because a working, deployable system exists at the end of each iteration and can be deployed for testing with other systems.
Quadrant 1 includes code integration tests, which are tests between components, but there is a whole other level of integration tests in enterprise systems. You might find yourself integrating systems through open or proprietary interfaces. The API you develop for your system can make it easy for your users to set up their own test framework. Easier testing for your customers makes for faster acceptance.
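The kind of check a customer might run against your API can be sketched as follows. The `OrderApi` class and its operations are hypothetical stand-ins for whatever interface your system actually exposes to partner systems:

```python
# Sketch of an end-to-end interoperability check through a public API.
# OrderApi is a minimal in-memory fake of the system under test.

class OrderApi:
    """Hypothetical public API of the system under test."""
    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def create_order(self, item):
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = {"item": item, "status": "open"}
        return order_id

    def get_order(self, order_id):
        return dict(self._orders[order_id])

def partner_can_round_trip_order(api):
    """A check a customer could run: create through the API, read the
    result back, and verify it is usable by their downstream system."""
    order_id = api.create_order("widget")
    order = api.get_order(order_id)
    return order["item"] == "widget" and order["status"] == "open"
```

When the API makes round trips like this easy to script, customers can fold them into their own test frameworks, which is exactly what speeds acceptance.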
In Chapter 20, “Successful Delivery,” we discuss more about the importance of this level of testing.
In one project Janet worked on, test systems were set up at the customer’s site so that the customer could start integrating them with its own systems early. Interfaces to existing systems were changed as needed and tested with each new deployment.