Back in the ’80s, when the Capability Maturity Model (CMM) movement was gathering steam, surveys reported that half of all software projects failed. Even today, a significant number of developers report that they have never worked on a successful software project. I’ve written about the relationship between this problem and Moore’s law in the past, but hucksters selling cure-alls don’t have time to investigate root causes.
This is evident most often in comparisons of development methodologies. Historically, corporate America applied the “Waterfall Model”, a name commonly traced to a 1970 paper by Winston Royce (though Royce himself never used the term). Royce identified seven critical activities in software development: systems requirements, software requirements, analysis, design, implementation, verification, and operation. The seven follow a definite chain of information dependencies, suggesting the “waterfall” analogy. But Royce himself observed that no project followed that sequence: there were all kinds of feedback loops from later stages back to earlier ones.
What is astonishing to me is that later practitioners removed the first and last steps. This reflects a kind of amnesia about the evolution of the institutions that software developers support. Prior to World War II, most businesses were dominated by “tribal knowledge” of their operations. Goals were set from on high, but implementation was organic and often opaque. That began to change during the war: confronted with its daunting logistics, the armed services established statistical control offices and trained practitioners in quantitative management. It was these men, Robert McNamara among them, who went on to transform the practices of corporate management in the 1950s.
Thus the importance of the “systems requirements” stage of the waterfall process. Information systems were being injected into organizations whose theory of operation was vastly different from actual performance. Initial users of structured analysis, for example, discovered that many significant decisions were made by white-collar workers loitering around the water cooler, bypassing the hierarchical systems of reporting required by their organizational structure. Deploying an information system that enforced formal chains of authorization often disrupted that decision making, and organizations suffered as a result.
The common charge leveled against the Waterfall model is that the requirements are never right, and so attempts to build a fully integrated solution are doomed to fail. This has led to models, such as Agile and Lean software development, that promote continuous delivery of solutions to customers. But remember what supports that delivery: ubiquitous networking and standard software component models (including J2EE, Spring, SQL databases, and .NET) that allow pieces to be replaced dynamically while systems are operating. Those technologies didn’t exist when the waterfall model was proposed. And when they did arrive, waterfall proponents immediately suggested a shift to “rapid prototyping” activities that would place working code before key end users as early in the project as possible. The expectation was that the politically fraught early stages of requirements discovery could then be avoided.
Actually, this might be possible at this point in time. Information systems provide instrumentation of operations to the degree that SAP now advertises the idea that they allow businesses to manifest a “soul.” Web service architectures allow modified applications to be presented to a trial population while the old application continues to run. Technology may now be capable of supporting continuous evolution of software solutions.
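To make that web-service pattern concrete, here is a minimal sketch of how a trial population might be routed to a modified application while the old one continues to run. The function name, the 10% trial fraction, and the version labels are illustrative assumptions of mine, not features of any particular framework:

```python
import hashlib

TRIAL_FRACTION = 0.10  # assumed: 10% of users see the modified application


def route(user_id: str) -> str:
    """Deterministically route a stable slice of users to the new
    application version; everyone else stays on the old one."""
    # Hash the user id so the assignment is stable across requests
    # and roughly uniform across the user population.
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    bucket = digest[0] / 255.0  # map first byte to [0.0, 1.0]
    return "new-app" if bucket < TRIAL_FRACTION else "old-app"
```

The deterministic hash matters: each user lands on the same version on every request, which is what makes observing the trial population side by side with the old application meaningful.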
But removing the systems requirements stage of the process leaves this problem: where do requirements come from? Watching the manipulation of statistics by our presidential candidates, only the naive would believe that the same doesn’t occur in a corporate setting. Agile and Lean models that promise immediate satisfaction weaken the need for oversight of feature specification, perhaps opening the door to manipulation of application development in support of personal ambitions among the management team.
Control of such manipulation will be possible only when integrated design is possible – where the purpose of implementing a feature is shown in the context of a proposed operation. Currently that kind of design is not practiced – although Diagrammatic Programming has demonstrated its possibility.
In our current context, however, the wisdom of the CMM is still to be heeded. In a comment to an author pushing Agile over Waterfall development, I summarized the CMM’s five stages as follows:
1. Define the boundary around your software process, and monitor and control the flow of artifacts across that boundary.
2. Require that each developer describe his or her work practices.
3. Get the developers to harmonize their practices.
4. Create a database to capture the correlations between effort (3) and outcomes (1).
5. Apply the experience captured in (4) to improve outcomes.
This is just good, sound, evidence-based management, and the author thanked me for explaining it to him. He had always thought of the CMM as a waterfall enforcement tool, rather than as a management process.
And for those arguing “Waterfall” vs. “Agile” vs. “Lean”: if you don’t have CMM-based data to back up your claims, you should be clear that you’re really involved in shaking up organizational culture.