Let's Don't and Say We Did

 

John D. McGregor

 

As I drove from the hotel to a client's site this morning, a commentator on public radio was expounding on "playoffs" in sports. One of his remarks was that maybe we could just do away with the regular season and go directly to the much more exciting playoffs. His tongue was deeply in his cheek, but it reminded me of a common situation in our business. How many managers want to go directly to the code and do away with the less "exciting" analysis and design phases of our development process? In the words of a currently popular expression, "Let's don't and say we did!"

 

This is a column about process (or the lack thereof). Please bear with me. I am going to "vent" in this column. I will attempt to keep the complaining to a minimum and the lessons learned to a maximum. Incidentally, I dedicate this column to all of the omnipotent managers who believe they can skip basic process steps or suspend basic software engineering principles and who are then totally dismayed when the project comes crashing down around them. You have made this column possible.

 

I recently worked with a manager who, a few weeks into the analysis modeling process, said "In the interest of time, we have to terminate this activity." This struck me as particularly interesting (actually, at the time it made me quite angry!). I wondered how it would save time to attempt to solve a problem without knowing what the problem is. That is what analysis is all about: analyzing the problem in order to better solve it. Correct? So the project personnel moved on to do the analysis of their individual packages, which had been identified by a small group of architects. This effort took far longer than it should have since there was no system-level context. The confusion resulted in much churn in the architecture of the application because the application-wide analysis was aborted to "save time". This same manager aborted, ignored and mangled almost every step in the process. I wanted to say (but of course never did!), maybe we could save more time by just not writing any code. Let's don't produce a product and say we did!

 

One thing that comes to mind at this point is that we often lose sight of the fact that the steps in the standard development process are based on how humans naturally solve problems. They are not made up out of someone's imagination and they are not added as penalty overhead by some higher authority. Each step produces a unique contribution to the success of the project. Admittedly, both sides of the process argument go to extremes. My time-saving manager would have been happy to do away with "process" altogether. On another project we were called into, the process team produced several hundred pages of process "directions". Developers spent hours following these directions with no noticeable improvement in quality of the product.

 

My partner and I once spent an afternoon attempting to construct a diagram that illustrated the variations that are possible in the development process. We gave up. That is because each development organization and each type of project will evolve a process that meets its specific goals and expectations. Any of these processes will, at their core, follow a basic sequence that I view as immutable: understand the problem (analysis), craft a solution (design), build that solution (implementation), assemble the pieces (integration) and verify the result (testing).

 

A developer working alone on a product follows this basic process. The projects being attempted today are increasing in complexity and size so that multiple developers must cooperate to produce a product. The development process should facilitate that cooperation. Further, the pressures of "time to market" are sequencing these basic steps using concurrent, iterative, and incremental engineering approaches. The process that results should still at its core follow the basic sequence of steps while providing support for the coordination activities needed to manage the complex interactions that result.

 

In this month's column I want to revisit the basic life cycle phases and consider the contribution each makes to the development of a quality product. The venting part comes when I discuss the implications of omitting any of the basic steps.

 

Process Phases

Ever hear someone say, "We just don't have time to continue to follow the process"? What are the implications of such a statement? Obviously the person believes that following the process takes longer than not following it. But surely if that were true, no one would ever follow a process. Of course speed is not the only consideration. Maybe a process slows you down but increases quality. Then it becomes a trade-off between speed and quality. Yet I seldom hear anyone frame the statement about abandoning the process as agreeing to lower quality. It occurs to me that either the person is just short-sighted or there is something wrong with their decision-making process. I want to consider the implications of "abandoning" the process for both schedule and quality.

Analysis

Classically, analysis is the process of developing an understanding of the problem we are trying to solve. Most methods for object-oriented analysis comprise two facets that are realized by concurrent activities. The first seeks to achieve an understanding of the domain(s) involved in the application and the second models the user's needs and problems. Domain analysis [8] is the technique that I use for the first and use case modeling [5] is the technique I use for the second.

 

Domain analysis produces a better understanding of the content domain(s). The team produces a set of models that make it easier for others on the project to understand the basic content of the system. The manager who cancelled the analysis modeling effort deprived his team of this shared mental model. This loss was evident in many facets of the project. Individual teams did not understand how their piece of the system fit into the project as a whole, and the interfaces they produced were incomplete.

 

Requirements analysis attempts to capture the exact behavior that the final system must exhibit. I, like many of you, use use case modeling to represent this information. In my example project, with about 15 to 20 teams performing analysis, some teams had 20 use cases and others had 100. No, the work was not poorly distributed. Some teams produced very high-level use cases while others produced very detailed use cases. This difference in granularity makes the use case model difficult to use. Nor were the individual use cases integrated into a single model.

 

Ian Graham [4] and Melissa Major [6] have presented techniques for taking large or high-level use cases and decomposing them into smaller, atomic use cases. These techniques result in use cases that are relatively uniform in granularity. This provides a basis for estimating effort and puts each use case at the same level of detail. When atomicity is combined with a well-structured (uses/extends) use case model, the result is a model of the problem to be solved that is easy to understand, maintain and modify.
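
To make the idea concrete, the sketch below holds a use case model as a simple tree in which the "uses" relation points to subordinate use cases; once the leaves are atomic and roughly uniform in size, counting them yields a crude effort estimate. The names and the hours-per-use-case figure are invented for illustration.

import java.util.ArrayList;
import java.util.List;

// Sketch only: a use case structured with "uses" relations. Atomic use
// cases are the leaves of the model; counting them supports estimation.
public class UseCase {
    final String name;
    final List<UseCase> included = new ArrayList<>();

    UseCase(String name) { this.name = name; }

    UseCase uses(UseCase sub) { included.add(sub); return this; }

    // An atomic use case has no subordinate use cases.
    int atomicCount() {
        if (included.isEmpty()) return 1;
        return included.stream().mapToInt(UseCase::atomicCount).sum();
    }

    public static void main(String[] args) {
        UseCase placeOrder = new UseCase("Place order")
            .uses(new UseCase("Validate payment"))
            .uses(new UseCase("Reserve stock"))
            .uses(new UseCase("Schedule shipment"));
        int hoursPerAtomicUseCase = 30;  // invented, team-specific figure
        System.out.println("estimated effort: "
            + placeOrder.atomicCount() * hoursPerAtomicUseCase + " hours");
    }
}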

 

What happens if you shortchange the analysis effort? From my perspective, the manager who terminated the analysis effort left his project with no adequate view of the problem that it was trying to solve. It was a very technical system and many of the developers were new to the discipline. This placed a great burden on the experienced developers, who had to mentor those with less domain knowledge. There was no system-level correlation and coordination. The lack of coordination during the use case development resulted in a model that was not the anticipated resource that would help the project; rather, it was a source of confusion.

 

Is the manager aware of the problems that his actions caused? I doubt it. We have little data upon which to base a comparison of what "might have been". Was code written? Absolutely. Did it work? Eventually. Could he have produced the same quality with fewer resources and in less time? In my opinion, yes, but companies will never be convinced until they collect data that will allow a quantitative evaluation.

 

Principle: If we don't try to understand what is known, then we can't be certain of what we don't know.

Design

My most memorable quote from a manager regarding design is: "Since the design documents are further along than the requirements, would it be all right to baseline (certify the correctness, completeness and consistency) the design before we do the requirements?" Essentially I was asked whether we could agree that we had the correct solution when we still weren't certain what the problem was!

 

Design is the crafting of a solution to the problem that was defined during analysis. I divide design into three dimensions: architectural, application and class design. Each of these dimensions has its own role in the overall life cycle, its own set of products and its own interactions with the other dimensions. Parts of each can proceed in parallel with the others.

 

Architectural design has experienced a renewal of interest. Shaw and Garlan [9] have provided basic approaches to defining architectures while Buschmann et al. [2] have provided reports of successful architectures and captured them as patterns that can be reproduced. The skeleton of the system is described as a set of components and the interactions between them. One of the process failures that I have experienced is an inappropriate, and inconsistent, division between architectural and application design. This resulted in an inappropriate level of detail in the architectural components. The architectural model incorporated the detailed mechanisms used to communicate between the components. The result was low-level decisions made by people with only a high-level view of the system. There is, and should be, a continuity between the architectural and application design dimensions; however, there can be too much overlap, or not enough.

 

If the architectural design overlaps too much with the application design by becoming too detailed, important distinctions between major elements may be lost in the large number of smaller elements. One standard technique is to use a layered model to separate levels of abstraction. Booch [1] recognized this some time ago and used "packages" as a level of aggregation of related classes. Since a package may contain packages, the concept can be used to build the architecture in layers from the individual classes to complete subsystems.
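
One way to make the layering concrete, sketched below under invented layer names, is to record the order of the layers and check that package dependencies only point downward; a class may depend on its own layer or a lower one, never upward.

import java.util.List;
import java.util.Map;

// Sketch only: checking that dependencies respect a layered model.
// The layer names and the example dependencies are hypothetical.
public class LayerCheck {
    // Layers from top (user interface) to bottom (utilities).
    static final List<String> LAYERS = List.of("ui", "application", "domain", "util");

    // A dependency is allowed if it points to the same layer or lower.
    static boolean allowed(String from, String to) {
        return LAYERS.indexOf(from) <= LAYERS.indexOf(to);
    }

    public static void main(String[] args) {
        // layer of a package -> layers of the packages it imports
        Map<String, List<String>> deps = Map.of(
            "ui", List.of("application"),
            "application", List.of("domain", "util"),
            "domain", List.of("util", "ui"));  // "ui" violates the layering
        deps.forEach((from, tos) -> tos.forEach(to -> {
            if (!allowed(from, to))
                System.out.println("violation: " + from + " -> " + to);
        }));
    }
}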

 

Class design is the cornerstone of the object-oriented development process. In fact, some would say that this is the only design phase needed: classes should be self-organizing. But this is seldom reality. In one project, the first object-oriented project for this organization, the developers were organized functionally. This resulted in multiple developers creating different parts of the same class. Developers competed for the same class files, redundant class attributes were defined and the design was partitioned functionally.

 

Class design is an exercise in introspection. We are looking at the objects produced from our class as others will see them, from the interface. But that introspection cannot happen in isolation. The behaviors given to an object are there because they are essential to the role the object plays in a specific domain or specific design pattern. Classes exist in a community and so too must class designers. A class has other classes in its family (those that it inherits from), and it receives necessary services from its object attributes (those composed within it). Similarly, the developers building a family of classes interact more closely with each other than they do with the teams whose objects they reach only through interfaces. This sense of community results in interesting changes. One IBM vice president tells of having a choice between cubicles and private offices for a new facility. He chose private offices. Shortly thereafter his team began using object technology. Soon he noticed that every meeting room in the building was being "permanently" reserved by a development team.
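
A minimal sketch of those two relationships, with invented class names: the class under design inherits from its family and obtains services from the attributes composed within it.

// Sketch only: Memo's family is Document (inheritance); SpellChecker is a
// composed attribute whose services Memo needs. All names are invented.
class Document {
    protected String body = "";
}

class SpellChecker {
    boolean looksOk(String text) { return !text.contains("teh"); }
}

class Memo extends Document {
    private final SpellChecker checker = new SpellChecker();  // composition

    boolean readyToSend() {
        return checker.looksOk(body);  // service from the composed object
    }
}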

 

Application design itself has at least two dimensions. The static dimension corresponds basically to the permanent relationships between classes while the dynamic dimension presents the execution-time relationships among objects. Depending upon their previous experience, those new to object-oriented software development will emphasize one or the other of these but will seldom achieve the required balance. Developers, under pressure to meet impossible deadlines, will focus on class definitions, usually by creating a class diagram. Object views, represented by message sequence charts and state diagrams, require more detailed investigation, and time. By not incorporating views of objects in the design process, the designer is more likely to make errors related to the cardinality of relationships and the allowable sequences of messages. Additionally, the class-centered view does not provide sufficient information for designing multi-threaded objects where cardinality is important.
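
The point about allowable message sequences can be made concrete with a small sketch. A class diagram shows only that a Session offers open, send and close operations; the dynamic view adds the ordering constraint that the code below enforces. The class and its methods are invented for illustration.

// Sketch only: the dynamic view constrains message order in ways a class
// diagram does not show. A Session must receive open() before send().
public class Session {
    private enum State { NEW, OPEN, CLOSED }
    private State state = State.NEW;

    public void open() {
        if (state != State.NEW)
            throw new IllegalStateException("open() is only valid on a new session");
        state = State.OPEN;
    }

    public void send(String message) {
        if (state != State.OPEN)
            throw new IllegalStateException("send() requires an open session");
        System.out.println("sending: " + message);
    }

    public void close() { state = State.CLOSED; }

    public static void main(String[] args) {
        Session s = new Session();
        s.open();          // legal sequence: open, send, close
        s.send("hello");
        s.close();
    }
}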

 

Canonical designs for clusters of classes [3] and entire architectures [2] have been captured in design patterns. These patterns describe the integration of classes in classical combinations that optimize specific characteristics of the solution. In a column to appear soon, I will talk about the most prevalent error made with patterns: applying them where they don't apply.

 

Principle: A design is only beautiful if it solves the problem you have.

 

Class Implementation

I view the class implementation phase as having a single primary output, new classes, and two primary inputs: class designs and an inventory of existing classes. A class is implemented by integrating instances of existing classes within the framework provided by the design of the new class. The design defines the interface specification as well as declarations of attributes. This phase should be viewed as containing three activities: implementing the class, testing the implementation and debugging the failures found by the tests.

I group these activities together because they represent a tightly coupled "sub-iteration" that involves only the class developer. But it is important to recognize these as separate activities even if the distinction is visible to only a single person. Even at this level, the testing activity should be structured to ensure objectivity. Test cases should not be ad hoc and should be recorded for reuse later. Debugging should definitely be seen as a separate activity from testing.
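
In that spirit, even a single developer can record class-level tests as code so they can be rerun rather than improvised and discarded. A minimal, framework-free sketch follows; it exercises the standard java.util.Stack purely to show the shape of a recorded test.

import java.util.Stack;

// Sketch only: a recorded, rerunnable test for a single class.
public class StackTest {
    public static void main(String[] args) {
        testPushThenPop();
        System.out.println("all tests passed");
    }

    static void testPushThenPop() {
        Stack<Integer> s = new Stack<>();
        s.push(42);
        assertEquals(42, s.pop());  // last element pushed comes off first
        assertEquals(0, s.size());  // the stack is empty again
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but got " + actual);
    }
}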

 

This is the phase in which the managers are most interested and it is the phase in which they get their biggest surprise. If the project is sufficiently large to have more than one developer, this is the time when the documents that were never created are missed. A class developer needs information about the services provided by the inventory of classes to be used in the implementation of his/her class. What is used in place of accurate models and documentation? Personal communication. Without thorough modeling and design, information can only be gained one-on-one. The amount of time wasted far exceeds the amount "saved" earlier. This type of problem is often overcome by dedicated developers who post their home phone numbers on their web site and carry beepers so they are available when (not if) there is a need for information about some of their classes. The end result is a burned-out staff and a process that relies on overtime to meet every deadline.

 

Integrating classes developed in this environment gives new meaning to the term "iterative" development. By developing to the needs of each specific client in the system, the class developer produces interfaces that change rapidly. Changes in interfaces, as opposed to changes in implementation, always ripple out rapidly across the project forcing other developers to modify their implementations and in some cases their designs. In my experience, where formal class design is not completed prior to the implementation phase, the resulting implementation evolves and its integrity is almost always compromised. Writing software contracts during the design phase requires developers to interact and to actively seek all possible clients before publishing a class specification.
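
One way such a contract might look in code, assuming a simple Account class invented for the purpose: the stated pre- and postconditions are the published specification, so clients build against the contract rather than against whatever today's implementation happens to do.

// Sketch only: a software contract expressed in code. The postcondition
// check uses assert and so requires running with assertions enabled (-ea).
public class Account {
    private long balanceCents;

    // Contract: requires 0 < amountCents <= balance;
    // ensures the balance decreases by exactly amountCents.
    public void withdraw(long amountCents) {
        if (amountCents <= 0 || amountCents > balanceCents)
            throw new IllegalArgumentException("precondition violated");
        long before = balanceCents;
        balanceCents -= amountCents;
        assert balanceCents == before - amountCents : "postcondition violated";
    }

    // Contract: requires amountCents > 0.
    public void deposit(long amountCents) {
        if (amountCents <= 0)
            throw new IllegalArgumentException("precondition violated");
        balanceCents += amountCents;
    }

    public long balance() { return balanceCents; }
}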

 

 

Principle: What one person writes many people can read.

 

Application Integration

This phase is made necessary by the fact that we generally develop applications incrementally. In addition to distributing system behavior among classes, the system requirements are partitioned into "increments" by identifying sets of use cases. Concurrent increments are staffed by multiple teams while sequential increments might be staffed by the same team. Many projects involve both types of increments and each type presents its own process and quality problems.

 

Concurrent increments usually involve different sets of use cases. While this makes it easy to identify the functionality assigned to a specific increment, classes often cross use case boundaries. If a class is needed by two increments being developed concurrently, the class should be explicitly assigned to one development team. Coordinating the design and implementation of these common classes can be simplified by a clear system of class ownership that assigns responsibility and the use of software contracts to define clear interfaces.
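
A sketch of what that might look like, with all names invented: the increment that merely uses the shared class codes against a published interface, while the owning team supplies the implementation behind it.

import java.util.HashMap;
import java.util.Map;

// Sketch only: the interface is the published contract between increments.
public interface CustomerDirectory {
    // Returns the customer's display name, or null if the id is unknown.
    String nameFor(String customerId);
}

// Owned and implemented by the team assigned the class; the other
// increment depends only on the CustomerDirectory interface above.
class InMemoryCustomerDirectory implements CustomerDirectory {
    private final Map<String, String> names = new HashMap<>();

    void add(String id, String name) { names.put(id, name); }

    @Override
    public String nameFor(String customerId) { return names.get(customerId); }
}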

 

Sequential increments usually modify existing system functionality to support additional features for use cases that have already been implemented. However, if the project is staffed by a single team, the initial implementation of use cases will be scheduled sequentially. As the increments are integrated, modifications must often be made to the functionality provided by the previous increment. Requirements change. Our domain knowledge expands. The use of software contracts to define the interface between two increments makes the location of needed changes explicit.

 

It is at this point in a project that class owners may lose control of their classes. A team formed by selecting representatives from all the increment teams may be charged with identifying obstacles to integration and then removing them. That is, they are empowered to fix problems encountered during the integration process. This can lead to several problems. The developer repairing faulty functionality may unnecessarily, or even unintentionally, narrow the specification of the class. Design models may become obsolete because code changes are not propagated back into the models. Software contracts are a simple device that can easily be updated and can provide the information needed to update the design models at a later time.

 

Principle: Interfaces should be used to control the integration of increments of system functionality just as they are used with individual classes.

 

Application Testing

I took the course to certify Personal Software Process (PSP) instructors. As part of the course we wrote programs and calculated statistics on such attributes as the amount of time spent in each of the life cycle phases for each program. I was surprised to find that my initial results looked very different from those of most other students in the course. Eventually I discovered (finally read the directions) that I was supposed to count ALL time spent testing, debugging and fixing the program as time spent in the testing phase! Not exactly an iterative approach. Many project managers similarly forget to budget rework time for developers to handle problems identified during testing. They assume that the developers will be working on the next increment while the testers test and that the integration team I described in the previous section can handle any repairs. Since this is seldom true, schedules are disrupted as developers spend "unexpected" time repairing previous work. Collecting data on how developers really spend their time can provide managers with a basis for changing their scheduling algorithms.
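
A sketch of the record keeping involved: logged effort tallied by life cycle phase, with rework counted where it actually occurs. The phases and minutes below are invented examples.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: accumulating logged effort by phase, PSP-style.
public class PhaseTimeLog {
    public static void main(String[] args) {
        Map<String, Integer> minutesByPhase = new LinkedHashMap<>();
        log("design", 90, minutesByPhase);
        log("code", 120, minutesByPhase);
        log("test", 45, minutesByPhase);
        log("test", 75, minutesByPhase);  // rework after a failed test
        minutesByPhase.forEach((phase, minutes) ->
            System.out.println(phase + ": " + minutes + " min"));
    }

    static void log(String phase, int minutes, Map<String, Integer> totals) {
        totals.merge(phase, minutes, Integer::sum);
    }
}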

 

As regular readers of this column know, testing in the general sense is not a single phase; rather, it is a set of activities that are distributed across the entire life cycle. I have talked about validating the output of every phase in the life cycle. The testing at this point in the development process is intended to exercise complete threads of functionality in the system.

 

Principle: Matching the testing process to the development process increases the efficiency and effectiveness of both processes.

Putting the Pieces Together

These basic steps appear in a number of variants depending upon the organizational structure and development philosophy being used. I used the adjectives concurrent, iterative and incremental above in describing a development process. Each of these attributes presents its own opportunities and problems.

 

I was once told by a manager that he did not believe in iterating. I expressed surprise and asked how he justified throwing away a system when they found a fault during testing! The fact is that virtually every project returns to some earlier phases of the life cycle late in the development process. This may be the result of errors found during testing that require redesign or code modifications, or it may be because the requirements are changing.

 

The iterative approach is beneficial when it is used to support a flexible development environment. One advantage of an iterative development process is that it schedules time to take advantage of lessons learned. Problems arise when the iterative nature is institutionalized and projects are required to iterate a certain number of times or on a fixed schedule. Excessive iterations can be a sign of a poor design. A poorly modularized design that must be changed will result in "ripples of change" that propagate out to the edges of the system and then reflect back into the system, much like the waves made by dropping a bar of soap in the middle of a bathtub. Not returning to a phase sufficiently far back in the process is also a problem. Version control identifiers showing 120 separate modifications to a class indicate an out-of-control process in which local code changes are made without iterating back over the class and system designs. The manager sees a quick fix and doesn't collect sufficient information to understand the aggregate long-term impact.

 

The most often asked question regarding an iterative project is, "How do I know I am making progress?" We [7] answered this by describing a set of metrics and an analysis technique. The technique supports the identification of trends such as a steady decline in the number of new classes being added to the system design. The defect detection rate should also approach zero for a project that is progressing toward a solution.
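
A sketch of how such a trend check might look; the per-iteration counts are invented, and a real analysis would use the metrics described in [7] rather than this simple monotonicity test.

import java.util.List;

// Sketch only: watching per-iteration counts for the trends described
// above. Progress shows as a steady decline toward zero in both series.
public class ProgressTrend {
    public static void main(String[] args) {
        List<Integer> newClasses = List.of(40, 25, 12, 5, 2);
        List<Integer> defectsFound = List.of(55, 38, 20, 9, 3);
        System.out.println("new classes declining: " + declining(newClasses));
        System.out.println("defect detection declining: " + declining(defectsFound));
    }

    static boolean declining(List<Integer> series) {
        for (int i = 1; i < series.size(); i++)
            if (series.get(i) > series.get(i - 1)) return false;
        return true;
    }
}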

 

Principle: An iterative approach is not an excuse for quick, haphazard work; rather, each iteration should make quantifiable progress toward a completed system.

 

Few problems that are important enough for us to be paid to solve are also sufficiently simple to be implemented in a single unit. Dividing the required functionality into "increments" limits the complexity that the development group must master at any given time. These increments must then be "stitched" together to form the completed application. The greatest risk here relates to the thoroughness of the modeling efforts. If the models that describe the application's behavior have not been completely developed and thoroughly tested, the inevitable changes that will be required late in the development of one increment will force changes in others.

 

If the dependencies between use cases are not thoroughly investigated prior to defining the increment, developers may find themselves needing to develop behavior that was not originally planned to be part of the increment. If the use case model has been properly structured using the uses/extends relations, as discussed above, increments can be chosen by selecting a use case and all the related use cases. If the use case model was not properly structured, the definition of increments will contribute to schedule slippage.

 

Principle: Increments should be defined as part of a comprehensive planning process that defines explicit interfaces between the increments, thereby exposing dependencies.

 

Most development projects will divide the staff into multiple teams in order to work on more than one increment at a time. Concurrency can have a confounding effect on the project if the overlap in time and functionality between increments becomes too great. Our fearless manager created a schedule that had three releases of a system being worked on at the same time. The completion dates for each were scheduled just a few weeks apart. The dependencies between the increments that made up each release were so great that changes in the later stages of the initial release inevitably caused much rework in the increments of the next release (and the effect rippled forward). The manager was mystified as dates were missed and total development time soared.

 

Principle: Concurrent increments should be chosen to minimize interactions.

 

In my experience, a concurrent, iterative and incremental development process is an exciting and highly productive environment if managed well and a nightmare otherwise. The managers that succeed here trust and respect their development staff. They value the early activities in the life cycle. They also strike a careful balance between formality and flexibility. In return they obtain a balance between predictability in the process and the ability to effectively address changing circumstances (and requirements).

Basic Principles

Managers' divinity is not limited to development phases. Often basic principles of software engineering are suspended by managers as well. Perhaps the most frequent occurrence involves the basic principle: "Adding personnel to a late project will make it later." On a recent project one team was running behind the others and was placing a scheduled release in jeopardy. As teams reached their goals, select team members were reassigned to the team that was late. Result? The team moved ever more slowly. When I mentioned the violation of the principle, the response was, "Yes, I know what it says and I agree with it, but I have to get the work done!" Let's don't and say we did!

 

So what do we do when a project is late and has a rigid deadline? Reduce functionality until you find a volume of software you can deliver. Then claim to the customer that "we met the schedule"! Let's don't and say we did! Seriously, reducing delivered functionality is the most reasonable approach. Clients are never happy about delays, but providing them a release that actually does something useful is preferable to attempting to deliver as promised and having nothing working by the deadline.

 

Principle: Software engineering principles may be "softer" than physical principles but they force us to make some "hard" decisions.

Tools

A senior designer on a project was reluctant to use tools for designing the details of a system. He was willing to use the CASE tool to draw high-level UML diagrams, but he would design for weeks on paper before entering those details into the model. Then he complained that the tool didn't really save him any time! This delayed other developers who needed to tie into that portion of the application model and those who needed access to the specifications defined in the model. As a result there was a large amount of one-on-one communication with the senior designer, which slowed his progress even further. Eventually the tool was used only to record the "final" model from which code was generated but not as a tool to try out ideas. This reduced the benefit of the tool and it reduced the quality of the models since they were not available to be critically examined by the design community.

 

Increasingly the web is being used as a means of communicating all types of project information. This medium speeds communication, ordinarily a good thing. However, the speedy communication of information that is not well thought out, or perhaps has not been reviewed, can lead to chaos. On one project, during the first increment, before a design meeting the developers would all check that they had the same version of the document to be used. We often found that one version, provided at 5 p.m. the previous day, would be superseded by a bug-fix version released at 9 a.m. the next morning! Developers became lost in a series of documents that differed in small but often significant details. Time was lost as developers raised issues that had already been identified and resolved by the document's owner. Following a series of steps to validate a document before publishing it to the project may delay its availability, but may ensure its accuracy when it is made public.

 

Principle: Tools should not determine the process; they should be chosen and used to facilitate the process.

Training

Once again our intrepid manager strikes, by canceling all classes: "We don't have time for training, just use the tool/language/notation." This of course ignores the basic premise that training saves time by presenting information in a more organized fashion than the "just in time" searching of someone trying to learn on their own. Certainly some training can be postponed, because a good education program will address both long- and short-term goals. Course offerings should be based on when personnel will be phased onto a project and the immediacy of the need. Collecting data both immediately after a course and a few weeks later can give an organization valuable input for judging the value of the education program to the project.

 

Courses can be structured to maximize the benefit to the audience. Software Architects' Director of Curriculum, Lee Copeland, concludes each chapter in our courses with a slide that has four bullets but no text. We invite students to spend a few minutes reviewing the material they have just covered, noting things they can use immediately, and writing down questions and notes about things to do after the course is over. Even courses from outside vendors, such as Software Architects, can include a structured workshop in which data, models and examples from the students' current work are used.

 

Principle: There should be an individually tailored training plan for each developer that is based on the project's priorities and each developer's personal goals.

Conclusion

A good process is one that organizes the tasks required to solve a problem and optimizes the resources applied to those activities. The penalty for not following such a process is not necessarily that the problem is not solved but that it is not solved as efficiently and effectively as it could be. Along the way quality is often the casualty as managers rush to meet dates. Schedule is another victim. What may be the quickest solution to implement may cost more time later in the process.

 

As I write each column, I ask myself what I hope you will gain by reading it and what you will do differently because of it. This month, I hope that I have caused you to reflect on process. On the absurdities that result from thinking about today without a view of tomorrow. What do I hope you will do? Critically review your current processes to determine whether they effectively coordinate the work of the group of people responsible for achieving your current goals. What I hope you won't do is write policies and procedures that just sit on the shelf to satisfy the ISO auditor but that make no real contribution to the quality of the product or the efficiency of the development effort!

References

1. Grady Booch. Object-Oriented Design with Applications, Benjamin/Cummings, Redwood City, CA, 1991.

 

2. Frank Buschmann, Regine Meunier, Hans Rohnert, Peter Sommerlad and Michael Stal. Pattern-Oriented Software Architecture: A System of Patterns, John Wiley & Sons, Chichester, England, 1996.

 

3. Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, MA, 1995.

 

4. Ian Graham. Migrating to Object Technology, Addison-Wesley, Wokingham, England, 1995.

 

5. Ivar Jacobson, Magnus Christerson, Patrik Jonsson and Gunnar Övergaard. Object-Oriented Software Engineering: A Use Case Driven Approach, Addison-Wesley, Reading, MA, 1992.

6. Melissa Major and John D. McGregor. A Qualitative Analysis of Two Requirements Capturing Techniques for Estimating the Size of Object-Oriented Software Projects, Dept. of Computer Science Technical Report 98-002, Clemson University, http://www.cs.clemson.edu, 1998.

 

7. John D. McGregor. Managing Metrics in an Iterative Environment, Object Magazine, SIGS Publications, 5(6), 1995.

 

8. Rubén Prieto-Díaz and Peter Freeman. Classifying Software for Reusability, IEEE Software, January 1987, pp. 6-16.

 

9. Mary Shaw and David Garlan. Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall, Upper Saddle River, NJ, 1996.