If the devil is in the details, then what is in the plan?

John D. McGregor

I have recently spent some time reflecting on the diversity of pieces that make up today’s applications. Traditional analysis techniques don’t adequately address either the potential defects in automatically generated code from IDL-to-whatever translators or the flexibility in program structure introduced by dynamically linked libraries (DLLs) and classloaders. The result is a test plan that does not adequately cover the full range of system behavior.

It is human nature to focus on those things that are most obvious. The instances of developer-defined classes that are explicitly created and represented by objects in our system are usually given the most attention. Class objects and meta-class instances are just two examples of objects whose contributions to the application are not always highly visible. As a result, these objects are often not considered in project test plans. There are other objects that are even less obvious but just as important to the quality of an application.

In this column I want to begin a discussion of a framework for analyzing software and other development workproducts. I will present a classification scheme that supports a more comprehensive analysis of the project artifacts. It assists the tester in identifying "objects" in the application that might not ordinarily be tested. The classification actually is useful beyond just the objects in the system, as I will illustrate later. Before discussing the framework, I want to cover a little background.

Classes, Instances and Objects

I want to first present a few definitions that are necessary to fully understand the rest of this column. Many of you will already know this and will want to skip to the next section, but I just want to establish my perspective on this easily confused area.

A class is a set of conceptually related entities. The class definition that is written in the programming language, at some stage of the development process, is predominantly the description of one instance or member of that set. The instance is defined by describing the actions that it can carry out and the state that it maintains.

In most cases the class definition also contains definitions needed for the class as a unit. For example, most class definitions include constructors, which are methods that create instances rather than behavior of a specific instance. In addition to these class methods, the definition may also specify state attributes that apply to the class as a unit. A common example is the need to track how many instances have been created.

So far, no objects! An object is an implementation artifact that is used as the container for a definition that is to be given operational form. I discussed the definitional and operational aspects of classes in an earlier column. Definitional aspects are typically bound earlier in the development process than are the operational aspects. What should be part of the definition and what should be allowed to vary during operation of the system is an aspect of design that is often not explicitly captured in design documents. I will discuss this more below.

Objects are able to send and receive messages. The object invokes the appropriate method definition based on the message that it receives. Each object contains the current value of each attribute specified in the definition for which the object is the realization. It is the interaction of these methods and state attributes that warrants the most attention during testing.

Most objects will represent an instance, that is a member, of a class. In some languages, objects are also used to represent class information. This actually happens in all object-oriented languages either implicitly or explicitly. Static attributes and methods in languages such as C++ and Java are the visible evidence of these class objects. There is only one copy of this class information even though the information is shared by all of the instances of the class. In fact, in some languages and depending upon the visibility attribute, the class object’s attributes can be accessed without any instance objects being created. Also the constructors can be thought of as belonging to this class object. Listing 1 illustrates these ideas in a class that implements the Singleton design pattern.

Listing 1: A Class Definition

public class OneOfAKind{

   public static OneOfAKind getInstance(){
      if (only == null){
         only = new OneOfAKind();
      }
      return only;
   }

   public …

   protected OneOfAKind(){..}

   private static OneOfAKind only;
}

 

I think it is useful to make the distinction that a class is a conceptual abstraction while an object is a conceptual realization. I prefer this view to the one that says a class is a data abstraction. In a data abstraction, the methods are associated with a class because they are the allowed operations on the data. This makes the data attributes the driving design force. My view of "object-oriented" design weighs both the behavior and the data equally in the design process. But I digress.

 

Two classifications

I would now like to consider the static/dynamic and definitional/operational classifications, keeping the class/instance/object distinction in mind. Each of these classifications is in fact a continuum, not a binary choice. I will attempt to do these characteristics justice, but I don’t claim to cover all situations or all combinations of the two.

Static vs. Dynamic

The static – dynamic continuum, Figure 1, encompasses several levels at which names and values are bound together. At the extreme in the static direction are those bindings that are determined prior to compilation time. In fact some definitions are actually fixed during the analysis phase of development. Pi is an example of this earliest binding.

At the other extreme are bindings that occur during execution. For example, in response to a message, a CORBA server is instantiated to handle a request. The server that is instantiated is based on a registration with the Naming Service. Thus the actual server implementation that is instantiated can be changed anytime up to the moment that the request for service is received.
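
To make that binding concrete, here is a minimal sketch in Java IDL of a client looking up a server through the Naming Service. The name "Monitor" is an illustrative assumption, as is the comment about a generated helper class; only the lookup itself follows the standard org.omg.CosNaming API.

import org.omg.CORBA.ORB;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContext;
import org.omg.CosNaming.NamingContextHelper;

public class NamingLookup {
    public static void main(String[] args) throws Exception {
        ORB orb = ORB.init(args, null);

        // Locate the Naming Service and resolve whatever server object is
        // currently registered under the (illustrative) name "Monitor".
        NamingContext naming = NamingContextHelper.narrow(
                orb.resolve_initial_references("NameService"));
        NameComponent[] path = { new NameComponent("Monitor", "") };
        org.omg.CORBA.Object server = naming.resolve(path);

        // An IDL-generated helper class (hypothetical here) would narrow
        // 'server' to its typed interface before any requests are sent.
        System.out.println("Resolved: " + server);
    }
}

Which implementation the request ultimately reaches is determined entirely by the registration in effect at the moment resolve() is called.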

Between these two extremes are several levels at which binding may occur. Static linking has long been used to avoid recompilation of modules that haven’t changed. Modules are linked into a single executable. Dynamically linked libraries were introduced to delay the time when certain facets such as specific hardware details or the language of the user interface had to be defined.

Java provides an even more dynamic environment. Classloaders actually determine the individual class definition to be used when a request for instantiation is encountered during system execution. This provides for improved flexibility and gives a whole new meaning to "distributed systems", but it also introduces a new set of testing problems. Even after an application has been granted permission to execute, security exceptions may arise during execution when the application does not have permission to load specific classes.
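
As a small illustration, the class to be instantiated below is only a string until the classloader resolves it, so the binding cannot be validated earlier; the exception handling is a sketch, not a prescription.

public class DriverLoader {
    // The class to instantiate is chosen at runtime from a name, so the
    // binding cannot be validated until the classloader resolves it.
    public static Object loadDriver(String className) {
        try {
            Class driverClass = Class.forName(className);
            return driverClass.newInstance();
        } catch (ClassNotFoundException e) {
            throw new RuntimeException("No such class: " + className);
        } catch (InstantiationException e) {
            throw new RuntimeException("Class cannot be instantiated: " + className);
        } catch (IllegalAccessException e) {
            throw new RuntimeException("No access to constructor of " + className);
        } catch (SecurityException e) {
            // The security policy may forbid loading this particular class.
            throw new RuntimeException("Not permitted to load " + className);
        }
    }
}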

The artifact’s position on the static/dynamic continuum determines when the validity of the binding should be established. The validity cannot be established until the binding is complete.

Figure 1: Static vs. Dynamic Continuum

 

Definitional vs. Operational

The definitional – operational continuum, Figure 2, represents the different ways in which a specification is defined or realized. As discussed above, the class definition is the basis for creating instances but is not executed itself. The class definition provides what might, at first glance, appear to be the extreme point on the definitional end of the continuum; however, in some languages such as CLOS, the structure and content of this definition is defined by a meta-class. The meta-class approach may theoretically extend the continuum indefinitely in the definitional direction since it is possible to have a meta-class for any class, including meta-classes.

Languages that support the meta-class approach provide a means by which the program may be self-modifying. When changes are made to a class’s definition, the behavior of all instances of the class also changes immediately. This case illustrates that just because something is definitional does not mean that it is also static, in the terms of the previous section.
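
Java does not let a running program rewrite a class definition the way CLOS does, but each loaded class is itself represented by a java.lang.Class object that can be examined reflectively. The following sketch simply lists what the class object reports about a definition.

import java.lang.reflect.Method;

public class ClassInspector {
    // Print the methods that the class object reports for the definition
    // from which 'instance' was created.
    public static void describe(Object instance) {
        Class definition = instance.getClass();   // the class object
        System.out.println("Class: " + definition.getName());
        Method[] methods = definition.getDeclaredMethods();
        for (int i = 0; i < methods.length; i++) {
            System.out.println("  " + methods[i].getName());
        }
    }
}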

The operational end of this continuum corresponds to the concept that an object is the basis of the actions taken in the system. An object provides the mechanisms needed to receive messages, dispatch methods and return results. It also associates instance attributes with methods. This information may be static as in a C++ object or it may be more dynamic in the case of a CLOS object that contains arbitrary slots.

The position on this continuum determines how the validity of the specification should be examined. The definitional end of the spectrum requires an inspection approach to validation while the operational end of the spectrum supports the execution of actual code.

Figure 2: Operational vs. Definitional Continuum

 

So What?

How does this discussion relate to testing? These two dimensions are useful in performing the analysis required for planning the testing process for a development project. Figure 3 shows the plane defined by the intersection of the two continua. I will use the quadrants in this plane to structure a discussion on analyzing the types of objects to be tested. Keep in mind that these are not binary choices, so a "quadrant" may contain some strange bedfellows as one moves from the outer reaches of a continuum toward the center.

For simplicity I want to assume that the test plan to be constructed covers development in a single language and that debug testing has been chosen as the primary testing approach rather than operational testing.

Figure 3: Dimensions

 

I have mentioned previously that each step in the development process should include a validation activity to ensure the quality of the product being produced in that step. I have mainly presented techniques for analyzing code to determine the tests to be executed. The framework begun by these two continua provides a means to go beyond this basic approach and define a more comprehensive analysis that encompasses all of the objects that make up a system. The analysis presented below will identify artifacts that should be inspected as well as code that should be executed under test conditions.

In this discussion I will place some artifacts in multiple places because different environments and different languages handle the same artifact differently. For example, I have already mentioned that in languages such as C++, classes are associated only implicitly with objects while in Smalltalk the relationship is explicit.

An example for context

Today’s systems blend a number of technologies to achieve an ambitious set of objectives. Object-oriented techniques are used to provide separation between the definitional and operational portions of the system. Reflective techniques support dynamic definitional characteristics that adapt the system to a wide range of operational profiles and conditions. Systems that must be "always available" use a combination of dynamic and static techniques to support the physical distribution of processes. DLLs use dynamic operational techniques to achieve platform portability.

Consider a real-time data monitoring system for on-board satellite control. The system must operate reliably for long periods of time even as the spacecraft ages and its operational profile changes. There is a requirement for the system to monitor itself and to make modifications in behavior to accommodate changes in the satellite’s hardware operation. The goal is to establish a network of similar satellites; however, later satellites will take advantage of newer technologies so that certain processors may be different. Critical board-level drivers will be delivered as DLLs to address these hardware differences. The system is made fault-tolerant by having redundant hardware with distributed software monitoring the operation and maintaining a mirror image of the operational state.

Systems of this type will require a variety of testing strategies to address the full spectrum of project objectives. I want to consider the plane defined by the intersecting lines in Figure 3 and consider where various features of this application fall and what implications that has for testing the system.

 

Static Definitional

Designs, specifications and other abstract products that cannot be directly changed or manipulated at runtime populate this section of the plane. This includes those portions of class definitions that are not modified at runtime. It also includes abstract artifacts such as interfaces. On one recent project the daily build became more and more difficult to achieve. Why? Individual developers were independently changing the interfaces that they provided. These changes were necessary, but they were necessary because the original interfaces had not been sufficiently tested before code was written against them.

How do you test a Java interface (or a C++ abstract class)? The approach used at one company was to conduct tests that exercised objects but that focused just on the methods defined in the one specific interface. They were defining test cases that attempted to validate the static, definitional interface indirectly by executing operational objects. A clear disadvantage of this approach is the need to have code prior to any testing activity. A second disadvantage is the need to distinguish failures due to incorrect implementation from failures due to the interface definition. The canonical example is that we typically speak of "class testing" when we really analyze the class definition but then utilize instance objects to operationalize the tests.
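
A minimal sketch of that style of indirect interface testing, assuming a hypothetical interface Sensor and the assumed contract that read() returns 0.0 immediately after reset(); each implementation under test would subclass the tester and supply createSensor().

// Hypothetical interface under test.
interface Sensor {
    double read();
    void reset();
}

public abstract class SensorInterfaceTest {

    // Each implementation under test overrides this factory method.
    protected abstract Sensor createSensor();

    public void runAll() {
        testResetThenRead();
        // ... one scenario per method declared in the interface
    }

    public void testResetThenRead() {
        Sensor s = createSensor();
        s.reset();
        double value = s.read();
        if (value != 0.0) {
            System.out.println("FAIL: read() after reset() returned " + value);
        }
    }
}

Note that a failure reported by such a test still has to be traced either to the implementation or to the interface definition itself, which is exactly the second disadvantage mentioned above.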

A test adequacy criterion in this area could be stated as: An adequate test set will include scenarios that utilize every portion of the definition under test. For a C++ abstract class, this means that the script for a design review/test should include scenarios that utilize each method.

Another category of static definitional products is class definitions that are only indirectly created by the developer. One example is a parameterized class definition (a template in C++). One of the concerns about templates relates to their use rather than an individual definition. The concern is that a template will be instantiated around a parameter that does not implement all of the required methods. Languages such as Ada provide for type checking of the parameters while others do not[1]. This should be an item on the code review checklist. The reviewer should ensure that all instantiations of the template are safe in that all required methods are supplied. A second example of indirectly created classes is classes generated by translators and code generators. IDL-to-(name your favorite language) translators generate numerous class definitions. This entire area will be the topic of a future column.
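
The instantiation concern has a rough analogue in Java, offered here only as an illustration and not as the C++ situation itself: the utility below assumes every element implements Comparable, and an unsuitable element type is not caught until a cast fails at runtime, which is why the review checklist item matters.

public class SortUtil {
    // Assumes both elements implement Comparable; nothing checks this when
    // the array is populated, so an unsuitable element type surfaces only
    // as a ClassCastException at runtime.
    public static void sortFirstTwo(Object[] items) {
        Comparable first = (Comparable) items[0];
        if (first.compareTo(items[1]) > 0) {
            Object tmp = items[0];
            items[0] = items[1];
            items[1] = tmp;
        }
    }
}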

The model testing process that I described in a previous column[3] is one approach to testing products that cannot be directly executed. It is basically a "guided inspection" process. Test cases are constructed based on some test case selection strategy. These test cases are "executed" by hand by the inspection team. In the case of an interface, the test plan should exercise the interface with as many clients as possible. The advantage of this approach is that it can be applied early in the development cycle. The disadvantage is the level of human resource required to execute the plan.

Static definitional products are validated directly by early guided inspections and indirectly by executable tests of the objects created from the definitions.

Static Operational

Those operational artifacts that are fixed prior to execution, such as "automatic" objects, are found in this portion of the plane. This is one of the most thoroughly tested segments of an application. Typically an instance object is created and subjected to a barrage of messages. The results returned by these messages are examined to determine whether the object has performed correctly or not. Remember that I am not using static and dynamic in exactly the usual sense. Therefore this area includes those objects declared as "automatic" in languages such as C++ as well as objects created dynamically on the heap, provided they are created from class definitions that are "fixed" prior to program execution. For automatically instantiated attributes, don’t forget that much implicit action happens as these attributes go out of scope.
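
A minimal sketch of such a test driver; the Account class here is a hypothetical example defined only so that the driver is complete.

// Hypothetical class under test.
class Account {
    private double balance = 0.0;
    void deposit(double amount) { balance += amount; }
    double getBalance() { return balance; }
}

public class AccountTestDriver {
    public static void main(String[] args) {
        Account account = new Account();   // instance fixed by a static definition
        account.deposit(100.0);
        account.deposit(50.0);
        if (account.getBalance() == 150.0) {
            System.out.println("PASS");
        } else {
            System.out.println("FAIL: expected 150.0, got " + account.getBalance());
        }
        // The instance goes out of scope here; in C++ the implicit destructor
        // actions at this point would also need to be exercised.
    }
}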

Basically, static operational workproducts are validated by executing test cases. Certain operational faults are not visible until actual objects are created and begin operation. Operational faults such as memory exhaustion or data structure overflow will not be evident until at least the class test phase if not the integration test phase. An adequacy criterion could be stated as: An adequate test set will include a representative sample of instances of each class under test.

The operational entities in a system are created based on the definitional entities. Thus every object is created according to some class definition. A virtually infinite number of operational entities can usually be created from a single definitional entity. This requires a sampling procedure to select a finite set of objects for testing.

Static operational products are validated directly through the execution of test cases, but only a sample of such artifacts are subjected to the tests.

Dynamic Definitional

In this quadrant of the plane are those definitional artifacts that are an active part of the executable. This includes meta-architectures that provide reflective capabilities that modify program behavior. Strictly speaking, the definition is usually not changed; rather, one previously written definition is substituted for the default definition. (This ignores interpreted languages such as Smalltalk and object-oriented Lisps where the definition may actually be changed during execution.) By changing the definition, the behavior of all instance objects created from the class definition is changed. I might, for example, add a new interface to a class, thereby allowing its instances to be passed to a particular server.

Dynamic definitional artifacts and dynamic operational artifacts are closely related because it is not usually possible (or logical) to change the definition of a class without also changing its operational behavior. On the other hand, it is often the case that the operational portion of the artifact is changed without changing the definition. DLLs are often used to provide, at runtime, a different implementation for a specification that was provided at compile time.

How thoroughly do we test "class objects"? By "class object" I mean that portion of the class definition that is labeled as "static" in C++ and Java programs. These "static" declarations provide a state that is dynamically modified during execution, just as an instance object’s state is, along with operations on that state.

The complete test plan for a class should define exactly how both instance objects and the class object will be used to operationally test the static class definition. The test cases for the class object are constructed exactly as those for instance objects are. Each static method’s signature is analyzed to identify boundary conditions and other landmarks that define functional test cases. The implementation of each class method would be analyzed to provide structural test cases. A test set coverage criterion would be: An adequate test set will exercise every class method and visit every state of the class object.
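
Applied to Listing 1, a sketch of a test aimed squarely at the class object: it exercises only the static method and the static attribute, checking that the class object moves from its "no instance yet" state to its "instance created" state and stays there.

public class OneOfAKindClassObjectTest {
    public static void main(String[] args) {
        // First call: class state should move from only == null to created.
        OneOfAKind first = OneOfAKind.getInstance();
        // Second call: class state should be unchanged and the same object returned.
        OneOfAKind second = OneOfAKind.getInstance();

        if (first == null) {
            System.out.println("FAIL: getInstance() returned null");
        } else if (first != second) {
            System.out.println("FAIL: getInstance() returned two different objects");
        } else {
            System.out.println("PASS: the class object maintains a single instance");
        }
    }
}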

Dynamic definitional entities can be tested directly by inspection of the families of related definitions that might be substituted at runtime. Sets of test cases that sample from the possible alternative definitions can be used to guide this inspection. The focus of indirect testing for scenarios in this quadrant is on the interaction of an object created from the definition under test with objects from associated classes. The objective is to determine whether the appropriate definitions are selected during execution and whether the definition changes the behavior of its instances sufficiently to ensure correct operation.

Dynamic Operational

Dynamic operational entities are those in which the bindings for individual objects can change at execution time. These changes in bindings are usually limited to pointer or reference variables. This has always been an area that was difficult to address because of the difficulty in tracing values as they are passed as parameters. In a typical object-oriented program this is exacerbated by the numerous short methods that we use.

The dynamic operational area focuses on references that can refer to any one of a number of objects that are equivalent under polymorphic substitution. This suggests a general coverage criterion: an adequate test set will cover all available implementations that can be legitimately substituted for the specified object. To specialize this to polymorphism: an adequate test set will cover all implementations of all overridden methods in an inheritance family.
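
A sketch of that criterion, using a hypothetical inheritance family (Shape with Circle and Square overriding area()) defined inline so the example is complete; the same scenario is run against every substitutable implementation.

// Hypothetical inheritance family used only for illustration.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}

public class PolymorphicCoverageTest {
    public static void main(String[] args) {
        // One entry per implementation that can be substituted for Shape.
        Shape[] family = { new Circle(1.0), new Square(1.0) };
        for (int i = 0; i < family.length; i++) {
            double area = family[i].area();
            String name = family[i].getClass().getName();
            if (area > 0.0) {
                System.out.println("PASS for " + name);
            } else {
                System.out.println("FAIL for " + name);
            }
        }
    }
}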

One example of a dynamic operational structure is meta-data. For example, many workflow products allow users to define new screens. A screen definition is represented in the application as an instance object rather than a class definition. Both the functionality to define a new screen and the functionality that presents the new screen and processes the data entered through it must be tested.

One of the most important issues in this segment is the possibility that dynamic operational entities will go away and no longer be addressable or conversely that they will not go away at all. This indicates a need to consider object life cycle in a test plan. That is, the test plan should include scenarios that explicitly trace an object from its creation to its destruction.
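
One hedged way to make such a creation-to-destruction scenario concrete in Java is to hold only a weak reference to the object once the last ordinary reference is dropped; because garbage collection timing is not guaranteed, this is a diagnostic aid rather than a deterministic test.

import java.lang.ref.WeakReference;

public class LifeCycleProbe {
    public static void main(String[] args) {
        Object subject = new Object();                   // creation
        WeakReference probe = new WeakReference(subject);

        subject = null;                                  // last strong reference dropped
        System.gc();                                     // request (not force) collection

        if (probe.get() == null) {
            System.out.println("Object was reclaimed as expected");
        } else {
            System.out.println("Object is still reachable -- possible leak");
        }
    }
}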

Summary of Quadrants

Since I have rambled more than usual, let me recap some of the important points that are summarized in Table 1.

  1. The more static an element is, the earlier it becomes unchanging and the earlier it should be tested.
  2. The more dynamic an element is, the more useful it is to sample alternatives during testing.
  3. For object-oriented software this sampling can often be constrained to a single inheritance family.
  4. Meta-level definitions introduce a new level of logic that adds a dynamic aspect to the application. This new logic also imposes additional testing requirements.
  5. Operational artifacts can be tested by executable tests.
  6. Definitional artifacts must be inspected in the context of specific test cases.

 

Table 1: Testing Activities

 

                  Static                     Dynamic

Definitional      Inspect                    Inspect all alternatives

Operational       Execute each binding       Execute all alternative bindings

 

 

A Test Plan for ALL Objects

Well, actually, I can realistically only test every type of object that will populate the final application. In previous columns I have focused on the easy targets. The techniques presented there have tested all of the methods defined in the application classes and, as a consequence, all of the code contained in the instance objects and some of the code contained in the class objects. The analysis framework presented in this column provides a basis for a much more complete analysis and thus a more complete project test plan. I will briefly outline how the framework shapes the project test plan structure.

  1. Construct review/test scripts for each template, abstract class and interface in the design.
  2. Identify architectural features that support dynamic aspects.
     a. Construct review/test scripts that guide the tests of the specification without regard to a specific implementation.
     b. Construct tests that configure the system with each implemented alternative and verify the result.
  3. Construct tests for all developer-defined classes.
  4. Identify tool-generated classes; add tests to cover the functionality in these classes not covered during the testing of developer-defined classes.
  5. Identify those points in the design where reference is made to a DLL or components stored in a repository. Construct scenarios that cover known equivalencies where substitution will be permitted.

This is a very brief outline that you should be able to expand based on the material above and the specifics of your application. The framework should remind you to look at a variety of criteria when selecting test cases.

Prioritizing

Development organizations and even individual projects within the same organization will emphasize some parts of the classification plane more than others depending upon the nature of the product and the goals of the project. In this section I want to discuss a few of the characteristics that influence the setting of priorities. Table 2 summarizes these characteristics in the context of the framework.

Process maturity – Organizations that have advanced beyond the chaotic process level [4] recognize the importance of early inspection of artifacts such as use cases and designs. Climbing through the levels of process maturity will result in increasing attention being given to the more definitional artifacts such as designs and class definitions. Recently a project leader summed up his reaction to a lengthy discussion that we were having about the relative amount of time to be spent in each development phase by saying, "But ultimately we must deliver the code." That focus has his organization firmly in a Level 0 process. His organization doesn’t even produce very many design documents much less inspect them!

Complexity – The programming techniques required to support dynamic modification to definitions introduce a higher level of complexity into the software than designs that do not use introspective techniques. Consider the difference between writing a program that solves a specific set of differential equations and a program that automatically writes programs to solve differential equations. Typically these features will be more carefully tested than other parts of the program because the programmer anticipates more defects in the introspective code than in the less complex portions of the code. Selecting test cases for this type of code is difficult because the complexity confounds analysis techniques and makes determining adequate coverage difficult as well.

Strategic significance – The strategic significance of a specific system behavior is often related to its flexibility. Being able to introduce different language versions in a range of countries quickly or to offer varying levels of functionality (standard, professional and integrated suite) quickly is a competitive advantage. This requires a dynamic environment in which DLLs adapt the system to various OS/windowing systems while core functionality is undisturbed. The large number of combinations of swappable features requires a statistical approach to testing such as OATS [2].

 

Table 2: Priorities

 

Static Definitional
  • Priority increases directly with CMM level
  • Given highest priority by low-complexity, high-reliability projects

Dynamic Definitional
  • Typically the most complex
  • Very strategic to the success of the application
  • Given highest priority by projects that include this technology in an application

Static Operational
  • Relatively low complexity
  • Given highest priority by projects that value code over design

Dynamic Operational
  • Second most complex, but well-understood technology
  • Given highest priority by projects that heavily utilize polymorphism

     

Summary

In this column I have presented a framework for classifying the variety of architectural, design, and implementation techniques that are being used to build object-oriented, component-based systems. These approaches have implications for every level of testing. The two continua I discussed here have proven useful in test planning for a number of development organizations.

The framework guides the development of test cases that fully examine the application under test. The framework provides a natural integration of the guided inspection testing of non-executable products with the dynamic testing of executable products. The tester is guided both through familiar testing arenas, such as statically declared instance objects, and through newer areas such as meta-levels.

In future columns I will spend additional time detailing testing techniques based on the framework presented here and I will discuss additional dimensions for the framework. One additional dimension is the continuum from infrastructure to application interface. A second is the distance between the objects that must communicate. The goal is to have a framework that guides the tester to select a test set that is comprehensive and effective at identifying defects.

     

References

  1. Ada 95 Language Reference Manual, http://www.adahome.com/rm95/
  2. Robert McDaniel and John D. McGregor. Testing the Polymorphic Interactions Between Classes, Clemson University Dept. of Computer Science, Technical Report 94-103, March 1994.
  3. John D. McGregor. The Fifty Foot Look at Analysis and Design Models, Journal of Object-Oriented Programming, July/August 1998.
  4. M. C. Paulk, B. Curtis, and M. B. Chrissis. The Capability Maturity Model for Software, Version 1.1, Software Engineering Institute, Technical Report CMU/SEI-93-TR, Feb 24, 1993.