Exploring Solutions Space

Perhaps the most humbling aspect of software development is the inflexibility of the machines that we control. They do exactly what we tell them to do, and when that results in disaster, there’s no shifting of the blame. On the other hand, computers do not become conditioned to your failure – they’re like indestructible puppies, always happy to try again.

That computers don’t care what we tell them to do is symptomatic of the fact that the success of our programs is measured in the non-digital world. Even when the engineer works end-to-end in the digital realm, such as in digital networking, the rewards come from subscriptions paid by customers who consume the content delivered by the network. In the current tech market, that is sometimes ignored. I keep reminding engineers earning six-figure salaries that if they don’t concern themselves with the survival of the middle class, at some point there won’t be any subscribers to their internet solutions.

So we come back again to an understanding of programming that involves the complex interaction of many system elements – computers, machines, people and all the other forms of life that have melded into a strained global ecosystem where the competition for energy has been channeled forcefully into the generation of ideas.

These ideas are expressed in many ways – not just through natural and computer languages, but also in the shape of a coffee cup and the power plant that burns coal to produce electricity. The question facing us as programmers is how best to represent the interaction of those components. Obviously, we cannot adopt only a single perspective. All languages encode information most efficiently for processors that have been prepared to interpret them. In the case of a computer chip, that preparation is in the design of the compilers and digital circuitry. For people, the preparation is a childhood and education in a culture that conditions others to respond to our utterances.

This context must give us cause to wonder how we can negotiate the solution to problems. This is the core motivation for our search for knowledge – to inform our capacity to imagine a reality that does not yet exist, a reality that manifests our projection of personality. We all use different languages to express our desires, everything from the discreetly worn perfume to the bombastic demands of the megalomaniac. We use different means of expressing our expectations, from the tender caress to the legal writ. None of these forms of expression has greater or lesser legitimacy.

In my previous post in this series, I introduced the idea of a program as an operational hypothesis that is refined through cause-and-effect analysis. Cause-and-effect denotes a relationship. This can be a relationship between objects whose behavior can be characterized by the brute laws of physics (such as baseballs and computer chips), or between organic systems (such as people and companies) that will ignore their instructions when confronted with destruction. What is universally true about these relationships is that they involve identifiably distinct entities that exchange matter and energy. The purpose of that exchange, in systems that generate value, is to provide resources that can be transformed by the receiver to solve yet another problem. In the network of cause-and-effect, there is neither beginning nor end, only a system that is either sustainable or unsustainable.

The single shared characteristic of all written languages is that they are very poor representations of networks of exchange. Languages are processed sequentially, while networks manifest simultaneity. To apprehend the connectedness of events requires a graphical notation that expresses the pattern of cause-and-effect. Given the diversity of languages used to describe the behavior of system elements, we are left with a lowest-common-denominator semantics for the elements of the notation: events occur in which processors receive resources, transform them according to some method, and emit products. The reliable delivery of resources and products requires some sort of connection mechanism, which may be as simple as the dinner table, or as complex as the telecommunications system.

This is the core realization manifested in Karl Balke’s Diagrammatic Programming notation. Generalizing “resources” and “products” to “values”, the notation specifies cause-and-effect as a network of events. In each event, a processor performs a service to transform values, which are preserved and/or transferred to be available for execution of other services by the same or another processor. The services are represented as boxes that accept a specification for the action performed by the processor in terms suitable for prediction of its interaction with the values. This may be chemical reaction formulae, spoken dialog in a play, or statements in a computer programming language. The exchange of values is characterized by connections that must accommodate all possible values associated with an event. The connections are described by the values they must accommodate, and represented in the cause-and-effect network by labelled lines that link the services.
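Since the notation is graphical, any textual rendering is only an approximation, but a minimal sketch helps fix the vocabulary. The Python below is my own illustration rather than Balke’s notation: the names Service, Connection, and Network are assumptions, and each action is reduced to a plain callable standing in for whatever language the processor actually interprets.

    # Illustrative sketch only: these class names are assumptions, not DP terminology.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Service:
        """A box in the network: a processor transforming received values."""
        name: str
        processor: str                                             # who or what performs the action
        action: Callable[[Dict[str, object]], Dict[str, object]]  # the action specification

    @dataclass
    class Connection:
        """A labelled line: it must accommodate every value it may carry."""
        label: str
        source: str   # service that emits the value
        target: str   # service that receives it

    @dataclass
    class Network:
        services: Dict[str, Service] = field(default_factory=dict)
        connections: List[Connection] = field(default_factory=list)

        def add_service(self, service: Service) -> None:
            self.services[service.name] = service

        def connect(self, label: str, source: str, target: str) -> None:
            self.connections.append(Connection(label, source, target))

    # A two-event example: grinding feeds brewing over a connection labelled "grounds".
    net = Network()
    net.add_service(Service("grind", "grinder", lambda v: {"grounds": v["beans"]}))
    net.add_service(Service("brew", "coffee machine", lambda v: {"coffee": v["grounds"]}))
    net.connect("grounds", source="grind", target="brew")

The point of the sketch is that services, values, and connections are the entire vocabulary; everything domain-specific lives inside the action specification.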

While Diagrammatic Programming notation does not require sequential execution, specification of a pattern of cause-and-effect leads inevitably to event sequencing. This does require that certain constructs be eliminated from the textual action descriptions, because the notation itself carries them: DP notation contains elements that specify actions such as “wait here for a value to appear” and “analyze a value to determine what service to perform next.” When the program is converted to an executable form, processor-specific instructions are generated from the network layout.
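One way that conversion might work can be sketched by continuing the hypothetical Network above: let the connections drive the sequencing, so that the graphical “wait here for a value to appear” element becomes an ordinary data-driven scheduling rule. Again, this is an approximation of the idea, not the DP tooling itself.

    # Continues the hypothetical Network/Service/Connection sketch above.
    from collections import defaultdict
    from typing import Dict

    def execute(net: Network, initial: Dict[str, Dict[str, object]]) -> Dict[str, Dict[str, object]]:
        """Run each service once every connection feeding it has delivered its value."""
        inbound = defaultdict(list)                  # target service -> connections feeding it
        for c in net.connections:
            inbound[c.target].append(c)

        outputs: Dict[str, Dict[str, object]] = {}   # service name -> values it emitted
        pending = list(net.services)
        while pending:
            progressed = False
            for name in list(pending):
                feeds = inbound[name]
                # "Wait here for a value to appear": run only when all inputs exist.
                if all(c.source in outputs for c in feeds):
                    gathered = dict(initial.get(name, {}))
                    gathered.update({c.label: outputs[c.source][c.label] for c in feeds})
                    outputs[name] = net.services[name].action(gathered)
                    pending.remove(name)
                    progressed = True
            if not progressed:
                raise RuntimeError("stalled; unresolved services: " + ", ".join(pending))
        return outputs

    # Feed the beans to the source service and let the layout sequence the rest.
    results = execute(net, initial={"grind": {"beans": "arabica"}})
    print(results["brew"]["coffee"])    # arabica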

In a properly disciplined design process, the end result is a specification of an operational hypothesis that allows the stakeholders in the implementation to negotiate their expectations. They may not be able to understand what is happening on the other side of a connection, but they can define their expectations regarding the values received by their processors. It is through that negotiation that the space of solutions is narrowed to a form that can be subjected to engineering design.

As has become obvious in this discussion, DP analysis abstracts even simple human concerns. The technology of Diagrammatic Programming must be concerned not only with the variant perspectives of participants in the design process, but also with the perceptual capabilities of different processors, where the value “Click Here” is encoded as Unicode bytes in computer memory but appears to the user as letters on a computer display. This richness manifests in terminology and notation that require careful study and disciplined application to ensure that a program can be elaborated into executable form.
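A trivial illustration of that dual perception, in Python rather than in DP notation – the same value, seen by two different processors:

    # The same value as perceived by two different processors.
    value = "Click Here"
    print(list(value.encode("utf-8")))   # [67, 108, 105, 99, 107, 32, 72, 101, 114, 101] – bytes in memory
    print(value)                         # Click Here – letters on the display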

Full implementation of the Diagrammatic Programming method was my father’s life-work, a life-work conducted by those concerned that systems serve the people who depend upon them, rather than being used for the propagation of exploitative egos. This introduction is offered in the hope that some of those committed to the production of value may be motivated to understand that work and carry it on to completion. It is simply far too much for me to accomplish alone.

In the most detailed comparison study of its use, the benefits were clear: rather than spending half of my development schedule in debugging, I spent one tenth. When a module had to be refactored to accommodate changed requirements, the effort was simply to select the services and connections to be encapsulated and cut-and-paste them onto a new drawing. While the representation of cause-and-effect may seem a burdensome abstraction, in fact it supports methods of design and analysis that are extremely difficult to emulate with instructions specified as text.

The Imitation Game

I’ve been known to get emotional at the movies, but not since Alien have I been as emotionally broken down as I was today by The Imitation Game.

Alan Turing not only made fundamental contributions to the mathematical foundations of modern computing, he also formulated an inspirational goal for machine intelligence. Known as the Turing Test, it proposes that if a human communicating through a neutral interface (such as a teletype) can’t distinguish the responses of a human from those of a machine, then the intelligence of the machine must be considered to be comparable to a human’s.

My father, Karl Balke, was one of the men who plowed the field cleared by Turing and others. As he described the think-tank at Los Alamos, the researchers brought every intellectual discipline to bear on the problem of transforming logic gates (capable only of representing “on” and “off” with their output) into systems that could perform complex computations. Their research was not limited to machine design. Languages had to be developed that would allow human goals to be expressed as programs that the machines could execute.

In the early stages of language development, competing proposals shifted the burden of intelligibility between human and machine. The programming languages that we have today reflect the conclusion of that research: most computer programs are simply algorithms for transforming data. The machine has absolutely no comprehension of the purpose of the program, and so cannot adapt the program when changes in social or economic conditions undermine the assumptions that held at the time of its writing. It is left to the “maintenance” programmer to accomplish that adaptation. (Today, most in the field recognize that maintenance is far more difficult than writing the original program, mostly because very few organizations document the original assumptions.)

I believe that my father’s intellectual struggle left him deeply sensitive to the human implications of computing. As a child, I grew up listening to case studies of business operations that came to a grinding halt because the forms generated by the computers were re-organized to suit the capabilities of relatively primitive print drivers, rather than maintaining the layout familiar to the employees. People just couldn’t find the information that they needed. Worse were the stories of the destruction of sophisticated planning systems implemented by manual methods. When automation was mandated, the manual procedures were simply too difficult to describe using the programming languages of the day. The only path to automation was to discard the manual methods, which could cripple production.

Turing confronted this contradiction in the ultimate degree after building a machine to break the Nazis’ method for secret communications, known as “Enigma.” If the achievement was to have sustained utility, the Allies’ knowledge of Axis military planning had to be limited: otherwise the Nazis would realize that Enigma had been defeated, and develop a better encryption method. As a consequence, most Allied warriors and civilians facing Nazi assault did so without benefit of the intelligence known to Turing and his team.

While the point is not made obvious, the movie illuminates the personal history that conditioned Turing for his accomplishments. Isolated psychologically from his peers – both by the social stigma of his homosexuality and by what today might be diagnosed as autism or Asperger’s syndrome – Turing was confronted from an early age by the question of what it meant to be human. Was it only the degree of his intelligence that distinguished him from his peers? Or was his intelligence tied to deviant – if not monstrous – behavior? My belief is that these questions were critical motivations for Turing’s drive to understand and simulate intelligence.

That parallels the experience of my father, burdened by his own psychological demons, but also critically concerned that artificial intelligence answer to the authentic needs of the people it empowered. That belief led him to devote most of his life to the creation of a universal graphical notation for representing the operation of systems of:

  • arbitrary collections of people and machines,
  • following programs written in diverse languages.

That technology, now known as Diagrammatic Programming, was recognized by some as the only provably sufficient method for systems analysis. Unfortunately, by the time it was refined through application, the economics of the software industry had shifted to entertainment and the World Wide Web. Engineering was often an afterthought: what was important was to get an application to the market, structured so that it held users captive to future improvements. Raw energy and the volume of code generated became the industry’s management metrics.

The personality traits that allowed Turing to build his thinking machines ultimately cost him the opportunity to explore their application. He was exposed as a “deviant” and drummed out of academia. Accepting a course of chemical castration that would allow him to continue his work privately, he committed suicide after a year, perhaps because he discovered that the side-effects made work impossible.

My father was afflicted by childhood polio, and has been isolated for years from his peer group by degenerative neuropathy in his legs.

While my empathy for both of these brilliant men was a trigger for the sadness that overwhelmed me as the final credits rolled, the stories touch a deeper chord. Both were denied the just fruits of their labor by preconceived notions of what it means to be human: Turing because he thought and behaved differently, my father because he attempted the difficult task of breaking down the tribal barriers defined by the languages that separate us.

So what lesson am I to draw from that, as I struggle to prove the truth of the power that comes from a surrender to the purposes of divine love? Is social rejection inevitable when we surrender what others consider to be “humanity”?

Is that not what condemned Jesus of Nazareth? His renunciation of violence and self-seeking? His refusal to fear death?