Exploring the Solution Space

Perhaps the most humbling aspect of software development is the inflexibility of the machines that we control. They do exactly what we tell them to do, and when that results in disaster, there’s no shifting of the blame. On the other hand, computers do not become conditioned to your failure – they’re like indestructible puppies, always happy to try again.

That computers don’t care what we tell them to do reflects the fact that the success of our programs is measured in the non-digital world. Even when the engineer works end-to-end in the digital realm, such as in digital networking, the rewards come from subscriptions paid by customers who consume the content delivered by the network. In the current tech market, that fact is sometimes ignored. I keep reminding engineers earning six-figure salaries that if they don’t concern themselves with the survival of the middle class, at some point there won’t be any subscribers to their internet solutions.

So we come back again to an understanding of programming that involves the complex interaction of many system elements – computers, machines, people and all the other forms of life that have melded into a strained global ecosystem where the competition for energy has been channeled forcefully into the generation of ideas.

These ideas are expressed in many ways – not just through natural and computer languages, but also in the shape of a coffee cup and the power plant that burns coal to produce electricity. The question facing us as programmers is how best to represent the interaction of those components. Obviously, we cannot adopt only a single perspective. All languages encode information most efficiently for processors that have been prepared to interpret them. In the case of a computer chip, that preparation is in the design of the compilers and digital circuitry. For people, the preparation is a childhood and education in a culture that conditions them to respond to our utterances.

This context must give us cause to wonder how we can negotiate the solution to problems. This is the core motivation for our search for knowledge – to inform our capacity to imagine a reality that does not yet exist, a reality that manifests our projection of personality. We all use different languages to express our desires, everything from the discreetly worn perfume to the bombastic demands of the megalomaniac. We use different means of expressing our expectations, from the tender caress to the legal writ. None of these forms of expression has greater or lesser legitimacy.

In my previous post in this series, I introduced the idea of a program as an operational hypothesis that is refined through cause-and-effect analysis. Cause-and-effect denotes a relationship. This can be a relationship between objects whose behavior can be characterized by the brute laws of physics (such as baseballs and computer chips) or between organic systems (such as people and companies) that will ignore their instructions when confronted with destruction. What is universally true about these relationships is that they involve identifiably distinct entities that exchange matter and energy. The purpose of that exchange, in systems that generate value, is to provide resources that can be transformed by the receiver to solve yet another problem. In the network of cause-and-effect, there is neither beginning nor end, only a system that is either sustainable or unsustainable.

The single shared characteristic of all written languages is that they are very poor representations of networks of exchange. Languages are processed sequentially, while networks manifest simultaneity. To apprehend the connectedness of events requires a graphical notation that expresses the pattern of cause-and-effect. Given the diversity of languages used to describe the behavior of system elements, we are left with a lowest-common-denominator semantics for the elements of the notation: events occur in which processors receive resources, transform them according to some method, and emit products. The reliable delivery of resources and products requires some sort of connection mechanism, which may be as simple as the dinner table, or as complex as the telecommunications system.

This is the core realization manifested in Karl Balke’s Diagrammatic Programming notation. Generalizing “resources” and “products” with “values”, the notation specifies cause-and-effect as a network of events. In each event, a processor performs a service to transform values, which are preserved and/or transferred to be available for execution of other services by the same or another processor. The services are represented as boxes that accept a specification for the action performed by the processor in terms suitable for prediction of its interaction with the values. This may be chemical reaction formulae, spoken dialog in a play, or statements in a computer programming language. The exchange of values is characterized by connections that must accommodate all possible values associated with an event. The connections are described by the values they must accommodate, and represented in the cause-and-effect network by labelled lines that link the services.
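The notation itself is graphical, but to make these terms concrete, here is a minimal sketch of how such a network of services and connections might be held in data. This is purely my own illustration, not Balke’s notation: the Service, Connection, and Network classes and the dinner-table example are hypothetical, chosen only to mirror the terms used above.

```python
# A minimal, hypothetical sketch of a cause-and-effect network held as data.
# These classes are illustrative only; they are not Balke's DP notation.
from dataclasses import dataclass, field


@dataclass
class Service:
    name: str        # label on the box
    action: str      # specification of the action, in whatever language suits the processor
    processor: str   # who or what performs the action


@dataclass
class Connection:
    label: str       # described by the values the connection must accommodate
    source: str      # name of the service that emits the values
    target: str      # name of the service that receives them


@dataclass
class Network:
    services: dict[str, Service] = field(default_factory=dict)
    connections: list[Connection] = field(default_factory=list)

    def add(self, service: Service) -> None:
        self.services[service.name] = service

    def connect(self, label: str, source: str, target: str) -> None:
        self.connections.append(Connection(label, source, target))


# The dinner table as a connection mechanism, echoing the example above.
net = Network()
net.add(Service("cook_dinner", "prepare a meal from ingredients", "parent"))
net.add(Service("eat_dinner", "consume the meal", "child"))
net.connect("the meal, placed on the dinner table", "cook_dinner", "eat_dinner")
```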

While Diagrammatic Programming notation does not require sequential execution, specification of a pattern of cause-and-effect leads inevitably to event sequencing. This does, however, require that certain constructs be removed from the action descriptions and expressed instead as elements of the network. For example, DP notation contains elements that specify actions such as “wait here for a value to appear” and “analyze a value to determine what service to perform next.” When the program is converted to an executable form, processor-specific instructions are generated from the network layout.
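To suggest what sequencing derived from the network layout might look like in executable form, here is a rough dispatch loop over the hypothetical Network structure sketched above. It is my own construction, not the code generator used by Diagrammatic Programming; the handlers mapping and the (label, value) return convention are assumptions made for illustration.

```python
# A rough, illustrative sketch of sequencing execution from the network layout.
# This is not the DP code generator; it only mimics the two constructs named above.
from collections import deque


def run(network, handlers, initial):
    """Fire services as the values they wait on become available.

    `handlers` maps a service name to a function that transforms an incoming
    value and returns (connection_label, produced_value), or None to stop.
    `initial` is a (service_name, value) pair that starts the run.
    """
    # Index connections by the service that emits on them.
    outgoing = {}
    for c in network.connections:
        outgoing.setdefault(c.source, {})[c.label] = c.target

    ready = deque([initial])
    while ready:                        # "wait here for a value to appear"
        service, value = ready.popleft()
        result = handlers[service](value)
        if result is None:
            continue
        label, produced = result        # "analyze a value to determine what service to perform next"
        target = outgoing[service][label]
        ready.append((target, produced))
```

With the dinner-table network above, calling run(net, handlers, ("cook_dinner", "ingredients")) with suitable handler functions would fire cook_dinner and then eat_dinner, in the order the connections dictate.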

In a properly disciplined design process, the end result is a specification of an operational hypothesis that allows the stakeholders in the implementation to negotiate their expectations. They may not be able to understand what is happening on the other side of a connection, but they can define their expectations regarding the values received by their processors. It is through that negotiation that the space of solutions is narrowed to a form that can be subjected to engineering design.

As has become obvious in this discussion, in the context of DP analysis even simple human concerns become abstracted. The technology of Diagrammatic Programming must be concerned not only with the variant perspectives of participants in the design process, but also with the perceptual capabilities of different processors, where the value “Click Here” is encoded as Unicode bytes in computer memory but appears to the user as letters on a computer display. This richness manifests in terminology and notation that requires careful study and disciplined application to ensure that a program can be elaborated into executable form.

Full implementation of the Diagrammatic Programming method was my father’s life-work, a life-work conducted by those concerned that systems serve the people who depend upon them, rather than being used for the propagation of exploitative egos. This introduction is offered in the hope that some of those committed to the production of value may be motivated to understand that work and carry it on to its completion. It is simply far too much for me to accomplish alone.

In the most detailed comparison study of its use, the benefits were these: rather than spending half of my development schedule on debugging, I spent one tenth. When faced with refactoring a module to accommodate changed requirements, the effort was simply to select the services and connections to be encapsulated and cut-and-paste them to a new drawing. While the representation of cause-and-effect may seem a burdensome abstraction, in fact it supports methods of design and analysis that are extremely difficult to emulate when instructions are specified as text.

Design by Discipline

When I received my Ph.D. in Particle Physics in 1987, I was qualified as among the wonkiest elite in science. If I had been concerned with proving that I was smart, I might have stayed in physics, but the expectations for practical applications of fundamental physics had eroded greatly after my freshman year. I wanted the larger world to benefit from the work that I did, so I took a job at a national laboratory. After a brief post-doc in fundamental physics, I moved over to environmental science. Throughout, the growing importance of computerized control and simulation meant that I enjoyed a distinct competitive advantage over my peers, as I had learned to program from one of the foremost practitioners in his generation – my father. When I became a full-time software developer, my background in physics allowed me to collaborate with engineers, to the extent that I would be brought in on engineering meetings when my peers were unavailable.

Now this may seem like just bragging, but the point is that my career has been a dynamically evolving mash-up of science, engineering and programming. My experience was filtered through a practice of systems analysis that led me to examine and control the way that those disciplines interact. So when I think about science, I don’t think about it as “what scientists do.” I do consider myself a scientist, but I do engineering and programming as well, and I perceive the three disciplines as very different activities.

I took a course on philosophy of science as an undergraduate, and I won’t drag you, dear reader, through all the definitions that have been offered. Most of them hold that Francis Bacon’s articulation of the scientific process was a magic portal for the human intellect, as though practical efficacy and the rational ordering of knowledge had not been recognized virtues among the ancients. This leads many philosophers of science to be overly concerned with truth, when what is really of interest to us as people is what has yet to be true.

The power of science is in allowing us to pierce the shadowy veil of possibility. In biology, understanding of the variety of living things and their mutual dependencies gives us the power to sustain agriculture, breed robust animals, and improve our health. Chemistry empowers us to predict the stability and properties of new substances. And physics probes the fundamental mechanisms that determine both the stability of the world around us and our ability to manipulate it.

So science provides us with pure knowledge, unconstrained by our desires or intentions. It therefore tends to attract people that are driven by curiosity. That may sound like a trivial thing, but to find order in the chaotic milieu of nature is a source of great hope. Calendars that predict the seasons allowed agricultural societies to improve their harvests and so avoid famine. The germ theory of disease motivated doctors to wash their hands, transforming hospitals from centers of disease incubation to places of healing. Scientific curiosity – to ask neurotically “why?” – is the source of great power over the world.

That power is visible in the manufactured reality all around us: the houses, roads, dams and microchips. None of these things would have existed in the natural world. The artifacts exist only because people have a purpose for them. That purpose may be as simple as cooking dinner for our children, or as grand as ensuring that the world’s knowledge is available through the internet to any person, anywhere, any time. Which of our goals are realized is largely a matter of economics: are enough people invested in the outcome that they are willing to pay to see it accomplished? We don’t have to have a kitchen in every home, but few of us can afford to go out to dinner every night, so we pay for a kitchen. The cost and delay of moving information via mail drove the growth of the internet, at an expense that I would imagine (I can’t find numbers online) has run into trillions of dollars.

Now when people invest a substantial sum of money, they want some assurance that they’ll get what they’re paying for. Appreciating that gold does not tarnish, the sultan seeking to protect the beauty of his marble dome does not want to be told, “All natural gold originates in supernovae.” Or, worse, “If we smash heavy elements together in an accelerator, we can produce ten gold atoms a day.” Those kinds of answers are acceptable in scientific circles, but they are not acceptable in the engineering world. In the engineering world, when somebody comes to you with money and a problem, your job is to design an implementation that will realize their goal.

Since we’re a species of Jones-chasers, most of the time the engineer’s job is fairly simple. People come wanting something that they’ve seen, and the challenge is to understand how it was done before and adapt the design to local conditions. But every now and then somebody comes in to ask for something completely novel. They want to build an elevator to space, for example, or create a light source that doesn’t produce soot. The engineer has no way of knowing whether such things are possible, except by reference to science.

It is into the gap between the formless knowledge of science and the concrete specifications of engineering that programming falls. Consider the light bulb: scientists know that heated objects glow, but also burn. Applying an electric voltage to a poor conductor causes it to heat as current flows through it. The filament will burn when exposed to oxygen, so we need to isolate it from air. Using an opaque material as the barrier will also trap the generated light. However, some solids (such as glass) are transparent, and air permeates only slowly through them.

The illustration is a cause-and-effect analysis. It looks at the desirable and undesirable outcomes of various scientific effects, attempting to eliminate the latter while preserving the former. The cause-and-effect analysis leads to an operational hypothesis: if we embed a wire in a glass bulb and apply a voltage, the wire will heat and emit light. This is not an engineering specification, because we don’t know how much the light bulb will cost, or how much light it will emit. But it also isn’t science, because the operational hypothesis is not known to be true. There may be no filament material that will glow brightly enough, or the required voltage may be so high that the source arcs over, or the glass may melt. But without the operational hypothesis, which I have called a “program,” engineering cannot begin.
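To connect this example back to the notation introduced earlier, the light-bulb hypothesis could be laid out as services and connections, reusing the illustrative Network and Service classes from the sketch above. The particular decomposition into services and the connection labels are my own choices, offered only as an example of expressing an operational hypothesis as a cause-and-effect network.

```python
# The light-bulb operational hypothesis as a cause-and-effect network,
# reusing the illustrative Network/Service sketch from earlier in this post.
# The decomposition and labels are my own illustrative choices.
bulb = Network()
bulb.add(Service("apply_voltage", "drive current through a poor conductor", "power supply"))
bulb.add(Service("heat_filament", "dissipate electrical energy as heat", "filament"))
bulb.add(Service("emit_light", "glow once hot enough", "filament"))
bulb.add(Service("exclude_air", "keep oxygen away so the filament cannot burn", "glass bulb"))

bulb.connect("electric current", "apply_voltage", "heat_filament")
bulb.connect("thermal energy", "heat_filament", "emit_light")
bulb.connect("oxygen barrier", "exclude_air", "heat_filament")
```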

We examined the challenge of software engineering in the first post in this series, focusing on the rapid development in the field and the difficulty of translating customer needs into terms that can be interpreted by digital processors. Today, we have arrived at a more subtle point: the algorithms written in our programming languages process information to produce information. The inputs for this process arise from nature and humans and, increasingly, other machines. Those inputs change constantly. Therefore, very few programs (except maybe those for space probes) are deployed into predictable environments. That includes the hardware that runs the program – it may be Atom or Intel or AMD, and so the performance of software is not known a priori. For all of these reasons, every piece of software is simply an operational hypothesis. It is a program, not a product.