A Component Testing Method
John D. McGregor
In the past few columns I have presented techniques for selecting, constructing and organizing component-level test cases. In this column I want to provide a component-level testing method and use it to provide a context for the techniques discussed previously and to consider some strategic testing issues. Along the way we will have the opportunity to talk about quality as well as testing.
A client I was visiting recently asked why, when there are so many software development methods, there is no software testing method that is as widely accepted. My answer was that component-level testing is so closely connected to development that the testing method must be tailored to fit the development method. Since some of the development methods are merging, I was encouraged to consider abstracting from several sources to describe a useful, albeit high-level, testing method.
Inputs to Component-level Testing
The models, information and assumptions that are the primary inputs to component-level testing are also some of the fundamental forces shaping the component-level testing method. Testing is seldom generously funded, so the materials available from the development process will not be regenerated in a different format just to support testing, and only a relatively small amount of material will be created exclusively for testing.
Project Test Plan - This plan will provide information such as the expected levels of specification and code coverage. These values will be used to determine how many test cases to construct.
System Requirements - As I stated in a previous column[McGregor], a sufficiently detailed and structured use case model can assist in determining the required behaviors for individual domain-level classes. The use case model is also used to create the use profile that is used to determine which parts of a component should be tested more than other parts.
Component Specifications - Three types of properties should be included in a comprehensive component specification from which the functional test cases will be built. First, individual operations are specified in terms of constraints on their inputs and outputs, expressed as pre- and post-conditions. Second, the state of the object is constrained by an invariant that specifies limits on each of the attributes of the object, and the state-transition diagram defines specific sequences of operations that represent the object's protocols. Finally, the interactions among methods and attributes are specified by a series of functional models, usually documented using object interaction diagrams. This constrains how the methods that implement the component's operations interact with each other, either directly or indirectly through the objects that implement the component's attributes.
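Such a specification can be made executable so that every test run checks it automatically. The sketch below, in Python for brevity, uses an invented Person component; the class, its attributes and its limits are illustrative assumptions, not taken from the article.

```python
from datetime import date

class Person:
    """Hypothetical component: pre-conditions and an invariant as executable checks."""
    MAX_NAME = 30  # assumed attribute limit from the invariant

    def __init__(self, name):
        # Pre-condition on the constructor's input.
        assert 0 < len(name) <= self.MAX_NAME, "pre: name must be non-blank and fit"
        self.name = name
        self.birth_date = None
        self._check_invariant()

    def set_birth_date(self, d):
        assert d <= date.today(), "pre: birth date cannot be in the future"
        self.birth_date = d
        self._check_invariant()

    def get_age(self):
        # Protocol pre-condition: setBirthDate must precede getAge.
        assert self.birth_date is not None, "pre: birth date has not been set"
        return date.today().year - self.birth_date.year

    def _check_invariant(self):
        # Invariant: limits on each attribute of the object.
        assert 0 < len(self.name) <= self.MAX_NAME
```

A functional test case then consists of invoking an operation with inputs drawn from its specification and letting the embedded assertions report any violation.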
Component Implementations - The actual implementation of the design provides the information necessary to construct the structural and interaction test cases. The language used is an important factor in the quality of the implementation because some types of faults are made impossible by the language and new sets of faults are introduced. For example, the infamous pointer errors are quite different between C++ and Java. The environment used to produce the code can also be useful in testing. I will illustrate this below.
The requirements and specifications inputs are significant contributors to the quality and timeliness of the end product. The comment I often get from developers is that there is not enough time to produce all of these documents. Yet the simple act of creating them locates numerous inconsistencies and incomplete specifications. Taking the time to create this information will eliminate certain potential faults early in the process, saving effort later in the development cycle.
Steps in the Method
I am going to structure this component-level testing method definition around the general Plan - Build - Execute test framework outlined by Software Quality Engineering (SQE) [Hetzel]. The component-level testing method, illustrated in Figure 1, is iterative and incremental. Each increment approaches the creation of component-level test cases from a different perspective. The first increment addresses functional tests, the second addresses structural tests and the third produces interaction tests. The iterative nature of the method accommodates the step-wise refinement approach of the development process and modifies the set of test cases until the desired levels of coverage are achieved. Each iteration accomplishes a complete Plan - Build - Execute cycle.
Figure 1: Testing Method
Below I will address each of the basic phases of the component test method by stepping through a single iteration. At the same time I will cut across the increments to address the related activities. Since techniques for performing each of these functions either have been or will be the subject of a column, I will not attempt to be comprehensive in the discussion of each phase.
Plan the Tests
The Planning phase of the component testing method includes an analysis of the component under test (CUT) to produce the test requirements. This analysis can begin as soon as the component’s specification is available and continues after the final implementation is available. The inputs to this phase include specification of the component with pre and post-conditions for each method, the state-transition model for the entire component and the set of object interaction diagrams (OIDs) that constitute the component’s functional model. The outputs are test case and test data specifications that have been selected to provide specific levels of product coverage.
Create functional test cases
The very process of specifying the functional tests will help identify some faults. The earlier in the process this can be done the more useful to the project. The signature of each operation is analyzed to identify the types of information needed to test the method that implements the operation. This is the first feedback point from testing to development. Finding conflicting or inconsistent specifications while attempting to specify test cases can identify future development problems before they occur.
Two tools that are useful for creating functional tests were discussed in a previous column[McGregor].
Input Value Specifications - In this table the types, equivalence classes and boundaries for the inputs and outputs for all of the tests can be specified. This will save time later because many of the methods in a component will have parameters, temporary variables and output values that share these definitions. This table is easy to construct even if the developers have not created pre and post-conditions for the operations.
Table 1: Input Value Specifications

Variable Name   Object Type     Equivalence Classes
-------------   -------------   -------------------------------------------------------
Name            String          1. name that exceeds the maximum length of the string
                                2. name that exactly matches the maximum length
                                3. complete name with remaining space
                                4. blank name
Person          Personnel       1. newly created
                                2. pre-existing
Authorization   Security Code   1. authorized for local access only
                                2. authorized for system-wide access
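These specifications can be recorded as data so that later steps reuse them. The sketch below, in Python, encodes the classes from Table 1; the representative data values and the assumed maximum name length are my own illustrative choices.

```python
# Equivalence classes from Table 1, written as data. The concrete
# representative values and MAX_NAME are illustrative assumptions.
MAX_NAME = 30  # assumed maximum length of the Name string

name_classes = {
    "exceeds maximum length": "x" * (MAX_NAME + 1),
    "exactly maximum length": "x" * MAX_NAME,       # boundary value
    "complete name with remaining space": "Pat Smith",
    "blank name": "",
}
person_classes = ["newly created", "pre-existing"]
authorization_classes = ["local access only", "system-wide access"]

def representatives(classes):
    """Return one representative test value per equivalence class."""
    return list(classes.values()) if isinstance(classes, dict) else list(classes)
```

Because many methods in a component share these parameter definitions, a single table like this serves all of their test cases.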
Input Data Matrix - An input data matrix is useful for planning the exact combinations of data values that will be used in each test case. Each test case is a permutation of the definitions found in the input value specifications. The percentage of the possible permutations that will actually be tested will depend upon the use profile and the importance of the component.
Table 2: Input Data Permutations

Name                                 Person          Authorization
----------------------------------   -------------   ---------------------------------
complete name with remaining space   pre-existing    authorized for local access only
complete name with remaining space   newly created   authorized for local access only
complete name with remaining space   pre-existing    authorized for system-wide access
...                                  ...             ...
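An input data matrix such as Table 2 can be generated rather than written by hand. The sketch below, in Python, enumerates every permutation of the equivalence-class labels and then applies a hypothetical use-profile filter; the filter rule is an assumption for illustration only.

```python
from itertools import product

# Equivalence-class labels per input, taken from Table 1 (labels only;
# actual data values would come from the input value specifications).
name = ["exceeds max", "exactly max", "with remaining space", "blank"]
person = ["newly created", "pre-existing"]
authorization = ["local only", "system-wide"]

# Every permutation is a candidate test case; the fraction actually run
# depends on the use profile and the importance of the component.
all_cases = list(product(name, person, authorization))

def high_priority(case):
    # Hypothetical use-profile rule: exercise every permutation that
    # involves system-wide access, assumed here to be the riskier path.
    return case[2] == "system-wide"

selected = [c for c in all_cases if high_priority(c)]
```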
One technique for testing the portion of the specification provided by a state-transition model is to compare the patterns of transitions in the model, the protocols, to the patterns of messages that are actually present in the code. These sequences can be checked for conformance by examining the code by hand or by using tools such as the Browser in Microsoft Visual C++, which I used to produce the call graph for the method Happening::moved shown in Figure 2. These listings are checked to determine whether any sequence of calls violates the pre-conditions of any of the methods.
Figure 2: Call Graph
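The conformance check itself is mechanical once the protocols are written down. The sketch below, in Python, records the allowed transitions from a state-transition model and validates a call sequence against them; the states and operations are invented for illustration.

```python
# Allowed transitions from a hypothetical state-transition model:
# (current state, operation) -> next state.
TRANSITIONS = {
    ("created", "setBirthDate"): "initialized",
    ("initialized", "getAge"): "initialized",
    ("initialized", "setBirthDate"): "initialized",
}

def conforms(trace, start="created"):
    """Return True if the sequence of calls is a legal protocol."""
    state = start
    for op in trace:
        if (state, op) not in TRANSITIONS:
            return False  # this sequence violates a pre-condition
        state = TRANSITIONS[(state, op)]
    return True
```

Each call sequence extracted from the call graph is run through the checker; any rejected sequence points at code that can violate a method's pre-condition.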
Supplement with structural test cases
Structural test cases are constructed in the second increment for two reasons. First, the structural tests can only be built with access to the component's implementation. Second, given the small size of most methods, functional tests will also cover much of the implementation. Structural test cases should only be created to cover those paths not reached by the functional test cases. In Figure 3 you see the output from the Profiler in Microsoft Visual C++. This listing, produced using the function coverage option, provides an indication of which functions have been executed, marked with an asterisk, for the execution of a specific test case. Aggregating this data over all the test cases allows the tester to update the test suite during the succeeding iterations to cover those functions that are not executed.
Figure 3: Profiler Output

Module Statistics for brickles.exe
----------------------------------
Functions in module: 208
Module function coverage: 70.2%

Covered Function
----------------
.  CAboutDlg::CAboutDlg(void) (brickles.obj)
.  CAboutDlg::DoDataExchange(class CDataExchange *) (brickles.obj)
.  CAboutDlg::GetMessageMap(void) (brickles.obj)
.  CAboutDlg::`scalar deleting destructor'(unsigned int) (brickles.obj)
.  CAboutDlg::`vector deleting destructor'(unsigned int) (brickles.obj)
.  CAboutDlg::~CAboutDlg(void) (brickles.obj)
*  CBricklesApp::CBricklesApp(void) (brickles.obj)
*  CBricklesApp::GetMessageMap(void) (brickles.obj)
*  CBricklesApp::InitInstance(void) (brickles.obj)
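The aggregation step amounts to taking the union of the per-test coverage sets and subtracting it from the module's function list. A minimal sketch, in Python, with invented function and test names:

```python
# Aggregating per-test-case function coverage (as reported by a profiler)
# to find the functions no test has yet reached. Names are illustrative.
all_functions = {"A::ctor", "A::draw", "A::move", "B::ctor", "B::save"}

coverage_per_test = {
    "test_draw": {"A::ctor", "A::draw"},
    "test_move": {"A::ctor", "A::move"},
}

covered = set().union(*coverage_per_test.values())
uncovered = all_functions - covered
pct = 100.0 * len(covered) / len(all_functions)
```

The uncovered set is exactly the target list for the structural test cases added in the next iteration.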
Figure 4: OID vs Caller Graph
Identify interactions among methods
The expected interactions within an object and with other objects are specified in the OIDs that constitute the class’s functional model. By waiting until the third increment to address the interaction perspective, the functional and structural test cases will have already tested most, if not all, of the method-to-method interactions (defined below). The test analyst will first complete the interaction matrix and then identify those interactions that have not been tested by other tests. The actual interactions can be viewed in the caller graph output, shown in Figure 4, from the Microsoft C++ Browser and compared with the expected interactions, as specified in the OIDs, before any tests are executed.
Interaction Matrix - The interaction matrix records the interactions among the methods in a single class or among a set of objects. For component testing the primary interest is among the methods within the component under test. There are two types of interactions that will be recorded. One method may invoke another method in the class, a method-to-method (MM) interaction. One method may message a particular class attribute and later another method may message that same attribute, a method-object-method (MOM) interaction. Table 3 shows a partial example for a Person record. In a later column I will illustrate how this matrix assists in planning tests of distributed systems. A MOM interaction will often be symmetric. That is, either message may occur first; however the MOM interaction shown here has only one allowable order. The second ordering would violate the pre-condition of the getAge method. Likewise the MM interaction is usually one-way with only one method designed to call the other.
Table 3: Interaction Matrix

                  setBirthDate       getAge   setZipCode           validateZipCode
setBirthDate                         MOM
getAge            MOM (violation)
setZipCode                                                         MM
validateZipCode                               MM (invalid order)
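An interaction matrix like Table 3 can be derived from two simple declarations: which methods call which (MM), and which attributes each method touches (MOM). The sketch below, in Python, uses the methods of Table 3; the call and attribute-access declarations are my assumptions about that example, not data from the article.

```python
# Hypothetical declarations for the Person record of Table 3.
calls = {"validateZipCode": ["setZipCode"]}    # caller -> callees (MM)
touches = {                                    # method -> attributes accessed
    "setBirthDate": {"birthDate"},
    "getAge": {"birthDate"},
    "setZipCode": {"zipCode"},
    "validateZipCode": {"zipCode"},
}

def interactions():
    """Return the set of MM and MOM interactions among the methods."""
    found = set()
    for caller, callees in calls.items():
        for callee in callees:
            found.add(("MM", caller, callee))
    methods = sorted(touches)
    for i, m1 in enumerate(methods):
        for m2 in methods[i + 1:]:
            shares_attribute = bool(touches[m1] & touches[m2])
            direct_call = m2 in calls.get(m1, []) or m1 in calls.get(m2, [])
            if shares_attribute and not direct_call:
                found.add(("MOM", m1, m2))
    return found
```

Each entry in the resulting set is a cell of the matrix; the test analyst then checks which of these interactions the existing functional and structural tests have already exercised.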
Build the Testing Infrastructure
The Infrastructure phase of the component-level testing method encompasses the construction of the software and the supporting material needed to conduct the tests. A number of tools are commercially available to automate the testing process but there are still specific test scripts and test data that must be implemented in a machine-usable format. The inputs to this phase are the plans created in the first phase and the basic corporate testing infrastructure. The outputs are the test databases, specific test case implementations and detailed testing instructions including checklists.
Construct data records
Many components will interact with external databases as the source of their data. The definitions of the records to store in the database used for testing come directly from the input data permutation tables. A number of separate databases will usually be constructed so that the number of different test conditions that a single database must support is limited. This is particularly important when one test condition, perhaps the presence of a record for a specific person, contradicts another condition, the absence of that person’s record.
Implement a Software Test Architecture
In the last column I discussed the Parallel Architecture for Component Testing (PACT). There are two major threads of activity here. First, a basic set of abstract classes is created. These classes provide specialized foundations for testing specific types of classes such as GUI classes, database-intensive classes and transaction monitor interactions, and portions of specialized architectures, for example the Document-View architecture of the Microsoft Foundation Classes. This part of the infrastructure should be provided to all developers on a project by some project or enterprise-level organization.
In the second major activity the person responsible for the PACT class for a specific component under test specifies and implements the test class. The test cases identified in the Planning phase are implemented as methods in the PACT class.
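The shape of such a test class can be sketched briefly. The Python below is a minimal PACT-style skeleton of my own devising, not the published PACT interface: an abstract class supplies setup, execution and cleanup, and a subclass for the component under test implements the test cases as methods.

```python
# A minimal PACT-style sketch; class and method names are assumptions.
class AbstractTest:
    """Abstract test class: setup, execution and cleanup of the environment."""
    def setup(self):    pass
    def cleanup(self):  pass

    def run(self):
        """Execute every method whose name begins with 'test_'."""
        results = {}
        for name in dir(self):
            if name.startswith("test_"):
                self.setup()
                try:
                    getattr(self, name)()
                    results[name] = "pass"
                except AssertionError:
                    results[name] = "fail"
                finally:
                    self.cleanup()
        return results

class PersonTest(AbstractTest):
    """Test class for a hypothetical Person component under test."""
    def setup(self):
        self.name = "Pat Smith"

    def test_name_not_blank(self):
        assert self.name != ""

    def test_name_within_limit(self):
        assert len(self.name) <= 30
```

Each test case identified in the Planning phase becomes one method of the concrete test class, so adding coverage in a later iteration means adding methods, not restructuring the test software.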
Create Checklists
A checklist enhances quality by ensuring that steps or criteria are not omitted. Table 4 shows a portion of one checklist from a test plan that is testing the GUI portion of each use case for a system. The checklists used in testing a set of components should cover the important non-performance issues for a system.
Table 4: Test Plan Checklist

1.5  Tab traversal is in correct order.         Y  N
1.6  Shortcuts work correctly.                  Y  N
1.7  Fields may be accessed in random order.    Y  N
1.8  Menus are in the correct order.            Y  N
Execute and Evaluate
The Execution and Evaluation phase of the component-level testing method includes activities such as applying the test cases to the component under test. Approaches to evaluating the results from the test cases are also included. The inputs to this phase include the plans and infrastructure created in the previous phases. The outputs from this phase include test results to be fed back to the developers as well as information needed to improve the test coverage during the next iteration.
Execute test cases
The drivers associated with the PACT structure will automatically execute the selected PACT objects. The PACT classes support the complete setup, execution and cleanup of the testing environment. This activity produces logs of test results including errors generated by the testing software.
Evaluate correctness of results
The test software should support the automatic validation of as many of the test results as possible. The anticipated results can be hard coded in the test case when a specific answer is expected. For test situations where numerous transactions are run against a database, sometimes in different orders, the environment is too complex to determine an exact answer in advance. In this case the test results are evaluated by comparing them with the results of independent queries executed directly against the database. These queries can be embedded within the PACT class and executed as part of the test execution process.
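A small sketch of this comparison, in Python with an in-memory SQLite database; the table, the component's write operation and the validation query are all invented for illustration.

```python
import sqlite3

def component_insert(conn, names):
    # Stand-in for the component under test writing records to the database.
    conn.executemany("INSERT INTO person(name) VALUES (?)", [(n,) for n in names])

def independent_count(conn):
    # Validation query embedded in the PACT class: it recomputes the expected
    # result directly, independently of the component's own code path.
    return conn.execute("SELECT COUNT(*) FROM person").fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person(name TEXT)")
component_insert(conn, ["Pat", "Lee"])
```

The test then asserts that the component's reported result agrees with the independently computed one, rather than with a hard-coded value.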
Evaluate test process effectiveness
This is a long-term process crossing all of the iterations. Two techniques can be followed here. First, measure the immediate effectiveness of the test cases by using a tool to identify those lines of the application's source code that were not executed by the test cases. The smaller the percentage of lines not covered, the more comprehensive the tests. Second, over the project life cycle we can measure the distance between where faults are injected into the application and where the testing process finds them. This measures the effectiveness of the different levels of testing (i.e. requirements testing, analysis and design model testing and component testing).
Interactions between the Methods
The interaction between the testing and development methods begins very early with the needs of the testing method being considered as initial decisions about design notations, techniques and schedules are made. Decisions must also be made about the interleaving of periods of development followed by periods of validation and feedback.
The feedback from component testers to the component developers occurs at the end of each phase in the test method. The Planning phase will often identify missing and/or inconsistent requirements and pre-conditions. The Infrastructure Construction phase will identify conflicting assumptions among test data specifications which often result from conflicting operation specifications. The Execution and Evaluation phase will, of course, identify those tests for which the component fails as well as identifying activities in the test method that can be improved.
The iterations of the test method will correspond to the iterations of the development method. The feedback from the component test method will be incorporated during each iteration. Each successive iteration of the development method will produce increasingly correct software which must, in turn, be tested. This places the additional requirements on the component test method that it be repeatable and easily modifiable.
Roles and Responsibilities
Since I am focusing only on component testing, there are three primary roles required by the method. All of these roles may be fulfilled by one person, or each role may be staffed by several individuals.
Developer - The developer of the component under test has the responsibility to create the agreed upon inputs for the testing process.
Test Analyst - The development products are analyzed to determine the test requirements. The test analyst role does not require development background, but the analyst may be either a "tester" or a developer. In either case, the person should understand and/or be trained in the testing perspective. The person should also be familiar with the overall project test plan that specifies strategies and quality goals for the project.
Test Implementer - The individual PACT test classes must be created by someone who is very familiar with the component under test. This may be the component developer, an assigned "buddy" who trades components to be tested with another developer or it may be an independent tester. The PACT abstract classes can require substantial development and testing experience to construct while the test classes are sufficiently constrained that less experience is required.
One effective combination is to have the test analyst be a different person from the one who developed the component but to have the component developer also implement the test software. The analyses used to create the test cases should be conducted by someone trained in the "testing perspective". The test software can be implemented by any developer provided the testing requirements are met.
Summary
The cliché that a chain is as strong as its weakest link certainly applies here. We can not expect to build robust applications from weak components. The component test method that I have presented is one piece of a comprehensive product testing method which in turn is one piece of a comprehensive application development method. I have attempted to illustrate some of the interactions between the methods that develop and test components. In future columns I will address several issues raised in the definition of this method.
References
[Hetzel] William C. Hetzel, The Complete Guide to Software Testing, Wellesley, Mass.: QED Information Sciences, 1984.
[McGregor] John D. McGregor, Planning For Testing, Journal of Object-Oriented Programming, February 1997.