Domain Domination

As a person with broad intellectual interests, I might be an anachronism. One of the problems of free market economics is that it exploits our strengths and exacerbates our weaknesses. People who seek a healthy balance don’t fit naturally in the system. Fortunately, I took up my career as a software developer during a sweet spot of sorts – enough infrastructure had been established that developers no longer had to worry about the details of how a computer manages memory and peripherals or does arithmetic on different data types, but the industry had not yet become a self-sustaining economic system driven by the generation and sharing of digital data. As a generalist, then, I was valuable as a translator between the digital realm and the “normal” world.

I was struck by the magic of the digital reality. My father enjoyed sharing stories of how he could make programs break in the early days by abusing their input devices, but by the time I had come on the scene, the electrical engineers had succeeded in creating a world in which the computer never seemed to get tired, made your mess disappear without fuss, and always did exactly what you asked. Knowing men, I wasn’t surprised that many were seduced completely by that fantasy. In my case, I was seduced by the fact that if you knew a little about software, you could get any productive person to talk to you, in the hope that they could partner with you to parlay their expertise into a dot-com fortune.

In translating those conversations into software, I was fortunate to have object-oriented development methods to exercise. They allowed me to create software abstractions that corresponded well with the goals of my users. In engineering applications concerned with the operation of actual machinery, object-oriented methods are a particularly strong fit.
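As a minimal sketch of what I mean (the class and its members are my own illustration, not drawn from any particular project), an abstraction for a piece of plant equipment can carry the same vocabulary its operators use:

// Hypothetical illustration: the object speaks the operator's language.
class Pump
{
    public double FlowRateGpm { get; private set; }   // gallons per minute

    public void Start(double requestedFlowGpm)
    {
        // In a real application this call would command the drive hardware.
        FlowRateGpm = requestedFlowGpm;
    }

    public void Stop()
    {
        FlowRateGpm = 0.0;
    }
}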

That’s not so much the case in the software industry today. Companies such as Google and Facebook have managed to compile huge stores of data, and aspire to correlate that information with economic activity. There’s really no definite theory behind those explorations, so we’ve seen the rise of languages that efficiently describe algorithms that filter, transform and correlate arbitrary pieces of data.

The recruiting challenge facing engineering companies is lampooned in a GE ad in which a new hire finds himself competing for attention against the developer of a mobile app that puts fruit hats on pictures of your pet. GE is competing against nascent monopolies (Google and Facebook again the exemplars) that throw money at developers just to keep them out of the hands of their competitors. I faced the same challenge when seeking to grow my current team.

But when exploring the technologies (Haskell, Clojure, and others) used by Google and its peers for analysis of large data stores, what struck me most was how terribly dry they are. There’s no sense of connection to people and the choices that they make. To me that takes a lot of the fun out of my practice.

This has been expressed in my working through of the examples in Troelsen’s Pro C# and the .NET 4.5 Framework. Confronted with examples with names like “ExtractAppDomainHostingThread” and “MyAsyncCallbackMethod”, I found myself figuratively tearing out my hair. Yes, these names are self-documenting, in the sense that they forecast accurately what we find in the code, but they aren’t even entertaining, much less actually fun.

When Troelsen begins exploring how .NET supports an application that has to perform many separate tasks in parallel, he introduces a class called Printer that writes a number to the screen and then waits a short time before writing the next number. By running many Printers in parallel, we can see clearly the unpredictability of the results in the screen output.

Of course I am offended by this whole concept. No Printer in the world ever behaved like this. So, given this class that does something meaningless while wasting time, I renamed it “Useless.” Rather than invoking “PrintNumbers”, I tell my Useless class to “WasteTime.” As methods for corralling wayward tasks are advanced, I further the metaphor with methods such as “WanderIdly” and “LanguishInAQueue.”
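A minimal sketch of the renamed class, assuming the same shape as Troelsen’s Printer demo (the loop count and delay are placeholders of my own):

using System;
using System.Threading;
using System.Threading.Tasks;

class Useless
{
    // Writes a number, then dawdles, just as the book's Printer did.
    public void WasteTime()
    {
        for (int i = 0; i < 10; i++)
        {
            Console.Write("{0}, ", i);
            Thread.Sleep(100);   // waste a little time before the next number
        }
        Console.WriteLine();
    }
}

class Program
{
    static void Main()
    {
        var layabout = new Useless();

        // Run several wastrels in parallel; the interleaved output makes
        // the unpredictability of the scheduling obvious.
        Task[] tasks =
        {
            Task.Run(() => layabout.WasteTime()),
            Task.Run(() => layabout.WasteTime()),
            Task.Run(() => layabout.WasteTime())
        };
        Task.WaitAll(tasks);
    }
}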

My son and I meet most Saturdays for lunch at the Fresh Brothers in the Westlake Village Promenade. When he interrupted my exercises, I talked him through these examples, and he burst out laughing. Now that’s success.

So what’s the developer trapped in the digital world-view to do? My suggestion would be a return to assembly coding. At Los Alamos in the ’50s, my father picked up the habit of trying to read the consonant-rich listings. He would become mightily amused as he punctuated them with lip-smacks and shrill sirens, decorations evolved in the secret society of machine developers trapped on the isolated buttes of New Mexico.

Artificers of Intelligence

The chess program on a cell phone can beat all but the best human players in the world. It does this by considering every possible move on the board, looking forward perhaps seven to ten turns. Using the balance of pieces on the board, the algorithm works back to the move most likely to yield an advantage as the game develops.
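That brute-force search is essentially minimax; here is a minimal sketch, assuming a hypothetical board type with a material-balance evaluation (nothing here corresponds to any particular chess engine):

using System;
using System.Collections.Generic;

// Hypothetical board abstraction; real engines add move ordering, pruning
// and far better evaluation than raw material balance.
interface IBoard
{
    IEnumerable<IBoard> NextPositions();   // the position after each legal move
    double MaterialBalance();              // positive favors us, negative the opponent
}

static class BruteForce
{
    // Look ahead 'depth' plies and return the best balance we can force,
    // assuming the opponent always answers with their own best reply.
    public static double Search(IBoard board, int depth, bool ourTurn)
    {
        if (depth == 0)
            return board.MaterialBalance();

        bool anyMoves = false;
        double best = ourTurn ? double.MinValue : double.MaxValue;
        foreach (IBoard next in board.NextPositions())
        {
            anyMoves = true;
            double score = Search(next, depth - 1, !ourTurn);
            best = ourTurn ? Math.Max(best, score) : Math.Min(best, score);
        }

        // No legal moves (checkmate or stalemate): fall back to the evaluation.
        return anyMoves ? best : board.MaterialBalance();
    }
}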

These algorithms are hugely expensive in energetic terms. The human brain solves the same problem in a far more efficient fashion. A human chess player understands that there are certain combinations of pieces that provide leverage over the opposing forces. As opportunities arise to create those configurations, they focus their attention on those pieces, largely ignoring the rest of the board. That means that the human player considers only a small sub-set of the moves considered by the average chess program.

This advantage is the target of recent research using computerized neural networks. A neural net is inspired by the structure of the human brain itself. Each digital “node” is a type of artificial neuron. The nodes are arranged in ranks. Each node receives input values from the nodes in the prior rank, and generates a signal to be processed by the neurons in the next rank. This models the web of dendrites used by a human neuron to receive stimulus and the axon by which it transmits the signal to the dendrites of other neurons.

In the case of the human neuron, activation of the synapse (the gap separating axon and dendrite) causes it to become more sensitive, particularly when that action is reinforced by positive signals from the rest of the body (increased energy and nutrients). In the computerized neural network, a mathematical formula is used to calculate the strength of the signal produced by a neuron. The effect of the received signals and the strength of the generated signal are controlled by parameters – often simple scaling factors – that can be adjusted, node by node, to tune the behavior of the network.
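A minimal sketch of that calculation for a single node, assuming the common weighted-sum-and-squash form (the weights, bias and logistic function here are illustrative, not a claim about any particular network):

using System;

static class Node
{
    // 'weights' are the adjustable scaling factors described above,
    // one per signal received from the previous rank; 'bias' shifts the threshold.
    public static double Signal(double[] inputs, double[] weights, double bias)
    {
        double sum = bias;
        for (int i = 0; i < inputs.Length; i++)
            sum += inputs[i] * weights[i];

        // Squash the sum into (0, 1) so the output behaves like a firing strength.
        return 1.0 / (1.0 + Math.Exp(-sum));
    }
}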

To train an artificial neural network, we proceed much as we would with a human child. We provide them experiences (a configuration of pieces on a chess board) and give feedback (a kind of grade on the test) that evaluates their moves. For human players, that experience often comes from actual matches. To train a computerized neural network, many researchers draw upon the large databases of game play that have been established for study by human players. The encoding of the piece positions is provided to the network as “sensory input” (much as our eyes provide it when we look at a chess board), and the output is the new configuration. Using an evaluative function to determine the strength of each final position, the training program adjusts the scaling factors until the desired result (“winning the game”) is achieved as often as possible.
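A toy sketch of that adjustment loop, substituting random perturbation for the gradient methods real training relies on (the scoring function, step size and everything else here are stand-ins of mine):

using System;

static class Trainer
{
    // 'scoreAgainstDatabase' stands in for replaying the stored games and
    // reporting how often the network chose winning moves.
    public static double[] Train(Func<double[], double> scoreAgainstDatabase,
                                 double[] weights, int rounds, Random rng)
    {
        double bestScore = scoreAgainstDatabase(weights);
        for (int r = 0; r < rounds; r++)
        {
            // Nudge one scaling factor and keep the change only if play improves.
            var candidate = (double[])weights.Clone();
            int i = rng.Next(candidate.Length);
            candidate[i] += (rng.NextDouble() - 0.5) * 0.1;

            double score = scoreAgainstDatabase(candidate);
            if (score > bestScore)
            {
                weights = candidate;
                bestScore = score;
            }
        }
        return weights;
    }
}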

In the final configuration, the computerized neural network is far more efficient than its brute-force predecessors. But consider what is going on here: the energetic expenditure has merely been front-loaded. It took an enormous amount of energy to create the database used for the training, and to conduct the training itself. Furthermore, the training is not done just once, because a neural network that is too large does not stabilize its output (too much flexibility) and a network that is too small cannot span the possibilities of the game. Finding a successful network design is a process of trial-and-error controlled by human researchers, and until they get the design right, the training must be performed again and again on each iteration of the network.

But note that human chess experts engage in similar strategies. Sitting down at a chess board, the starting position allows an enormous number of possibilities, too many to contemplate. What happens is that the first few moves determine an “opening” that may run to ten or twenty moves performed almost by rote. These openings are studied and committed to memory by master players. They represent the aggregate wisdom of centuries of chess players about how to avoid crashing and burning early in the game. At the end of the game, when the pieces are whittled down, players employ “closings”, techniques for achieving checkmate that can be committed to memory. It is only in the middle of the game, in the actual cut-and-thrust of conflict, that much creative thinking is done.

So which of the “brains” is more intelligent: the computer network or the human brain? When my son was building a chess program in high school, I was impressed by the board and piece designs that he put together. They made playing the game more engaging. I began thinking that a freemium play strategy would be to add animations to the pieces. But what if the players were able to change the rules themselves? For example, allow the queen to move as a knight for one turn. Or modify the game board itself: select a square and modify it to allow passage only on the diagonal or in one direction. I would assert that a human player would find this to be a real creative stimulus, while the neural network would just collapse in confusion. The training set didn’t include configurations with three knights on the board, or restrictions on moves.

This was the point I was making: intelligence is not determined by our ability to succeed under systems of fixed rules. Intelligence is the measure of our ability to adapt our behaviors when the rules change. In the case of the human mind, we recruit additional neurons to the problem. This is evident in the brains of blind people, in which the neurons of the visual cortex are repurposed for processing of other sensory input (touch, hearing and smell), allowing the blind to become far more “intelligent” decision makers when outcomes are determined by those qualities of our experience.

This discussion, involving a game without much concrete consequence, appears to be largely academic. But there have been situations in which this limitation of artificial intelligence has been enormously destructive. It turns out that the targeting systems of drones employ neural networks trained against radar and visual observations of friendly and enemy aircraft. Those drones have misidentified friendly aircraft in live-fire incidents, firing their air-to-air missiles and destroying the target.

So proclamations by some that we are on the cusp of true artificial intelligence are, in my mind, a little overblown. What we are near is a shift of power toward machines that operate with a fixed set of rules, and away from biological mechanisms that adapt their thinking when they encounter unexpected conditions. That balance must be carefully managed, lest we find ourselves without the power to adapt.

Staying Cool with R

Before returning to the control industry in 2008, I was employed in business systems development. My employer was hot to get in on the off-shore gambling business, but was kind enough to ask me what I was interested in. I offered my concern that people were overwhelmed with the demands imposed by 24/7 communications, to the point that their ability to actually immerse themselves in the experience of the moment was degrading. I thought that a system that guided them through reflection and looked for correlations between mood and experience might be the basis for helping them find people and places that would allow them to express their talents and find joy.

His reaction was to try to stake me at the gambling tables in Reno.

But he did recognize that I was motivated by a deep caring for people. That’s led me in other directions in the interim. I’ve been trying to moderate the harsh tone in the dialog between scientists and mystics. I’ve accomplished about as much as I can – the resolution I have to offer is laid out in several places. I just need to let the target audience find the message.

So I’ve turned back to that vision. A lot has changed in the interim, the most important being the unification of the Windows platform. This means that I can try to demonstrate the ideas in a single technology space. There are only so many minutes in the day, after all.

I began with a review of statistical analysis. I’ve got a pair of books on the analysis of messy data, bought back when I was a member of the Science Book of the Month club. That provided me with the mathematical background to make sense of Robert Kabacoff’s R in Action. However, it’s one thing to do analysis on the toy data sets that come with the R libraries. Real data always has its own character, and requires a great deal of curation. It would be nice to have some real data to play with.

One approach would be to begin digging into Bayesian language net theory and researching psychological assessment engines in preparation for building a prototype that I could use on my own. But I already have a pretty evolved sense of myself – I don’t think that I’d really push the engine. And I would really like to play with the Universal applications framework that Microsoft has developed. On top of that, the availability of an IoT (internet of things) build of Windows 10 for Raspberry Pi means that I can build a sensor network without having to learn another development environment.

So the plan is to deploy temperature and humidity sensors in my apartment. It’s a three-floor layout with a loft on the top floor. The middle floor contains a combination living/dining area and the kitchen. Both the loft and the kitchen have large sliders facing west, which means that they bake in the afternoon. On the bottom floor, the landing opens on one side to the garage and on the other side to my bedroom. The bedroom faces east behind two large canopies, although the willow tree allows a fair amount of light through. There’s a single thermostat on the middle floor. So it’s an interesting environment, with complicated characteristics.

While thermal balance also involves the state of windows, doors and appliances, I think that I can get a pretty good sense of those other elements by monitoring the air that flows around them. Being a hot yoga masochist, I’m also curious regarding the effect of humidity.

So I’ve got a Raspberry Pi on the way, and have installed Microsoft’s Visual Studio Community on my Surface Pro. Combination temperature and humidity sensors cost about ten dollars. While real-time data would be nice, I don’t think that for the purposes of my study I’ll need to link to the Wi-Fi to push the data out to a cloud server. I can use my laptop to upload it when I get home each day. And there’s some work to do in R: the time series analysis includes seasonal variations on annual trends, and I certainly expect my measurements to show that, but there will also be important diurnal variations. Finally, the activation of temperature control appliances (air conditioner and furnace) needs to be correlated with the data. I don’t want to invest in a Nest thermostat, or figure out how to get access to the data, so I’m going to see if I can use Cortana to post notes to my calendar (“Cortana – I just set the air conditioning to 74 degrees”).
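As a minimal sketch of the logging side (the sensor-read method, file name and five-minute interval are all placeholders of mine; the real sensor driver and any Windows IoT specifics remain to be worked out):

using System;
using System.IO;
using System.Threading;

class ClimateLogger
{
    // Hypothetical stand-in for the real sensor driver; replace with the
    // actual read from whatever temperature/humidity part arrives.
    static void ReadTemperatureHumidity(out double tempF, out double humidity)
    {
        tempF = 72.0;       // placeholder values
        humidity = 40.0;
    }

    static void Main()
    {
        const string logPath = "climate.csv";   // collected by the laptop later

        while (true)
        {
            double temp, humidity;
            ReadTemperatureHumidity(out temp, out humidity);

            File.AppendAllText(logPath, string.Format("{0:o},{1:F1},{2:F1}{3}",
                DateTime.Now, temp, humidity, Environment.NewLine));

            Thread.Sleep(TimeSpan.FromMinutes(5));   // sampling interval, arbitrary
        }
    }
}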

Obviously there’s a lot to learn here. But no single piece is overwhelming until I get to the data analysis. It’s just a matter of cobbling together small pieces. Should be fun! And if I can figure out how to manage my windows and doors and appliances to reduce my energy expenditures – well, that would be an interesting accomplishment.

Exploring Solutions Space

Perhaps the most humbling aspect of software development is the inflexibility of the machines that we control. They do exactly what we tell them to do, and when that results in disaster, there’s no shifting of the blame. On the other hand, computers do not become conditioned to your failure – they’re like indestructible puppies, always happy to try again.

That computers don’t care what we tell them to do is symptomatic of the fact that the measure of the success of our programs is in the non-digital world. Even when the engineer works end-to-end in the digital realm, such as in digital networking, the rewards come from subscriptions paid by customers that consume the content delivered by the network. In the current tech market, that is sometimes ignored. I keep on reminding engineers earning six-figure salaries that if they don’t concern themselves with the survival of the middle class, at some point there won’t be any subscribers to their internet solutions.

So we come back again to an understanding of programming that involves the complex interaction of many system elements – computers, machines, people and all the other forms of life that have melded into a strained global ecosystem where the competition for energy has been channeled forcefully into the generation of ideas.

These ideas are expressed in many ways – not just through natural and computer languages, but also in the shape of a coffee cup and the power plant that burns coal to produce electricity. The question facing us as programmers is how best to represent the interaction of those components. Obviously, we cannot adopt only a single perspective. All languages encode information most efficiently for processors that have been prepared to interpret them. In the case of a computer chip, that preparation is in the design of the compilers and digital circuitry. For people, the preparation is a childhood and education in a culture that conditions others to respond to our utterances.

This context must give us cause to wonder how we can negotiate the solution to problems. This is the core motivation for our search for knowledge – to inform our capacity to imagine a reality that does not yet exist, a reality that manifests our projection of personality. We all use different languages to express our desires, everything from the discreetly worn perfume to the bombastic demands of the megalomaniac. We use different means of expressing our expectations, from the tender caress to the legal writ. None of these forms of expression has greater or lesser legitimacy.

In my previous post in this series, I introduced the idea of a program as an operational hypothesis that is refined through cause-and-effect analysis. Cause-and-effect denotes a relationship. This can be a relationship between objects whose behavior can be characterized by the brute laws of physics (such as baseballs and computer chips) or organic systems (such as people and companies) that will ignore their instructions when confronted with destruction. What is universally true about these relationships is that they involve identifiably distinct entities that exchange matter and energy. The purpose of that exchange, in systems that generate value, is to provide resources that can be transformed by the receiver to solve yet another problem. In the network of cause-and-effect, there is no beginning nor end, only a system that is either sustainable or unsustainable.

The single shared characteristic of all written languages is that they are very poor representations of networks of exchange. Languages are processed sequentially, while networks manifest simultaneity. To apprehend the connectedness of events requires a graphical notation that expresses the pattern of cause-and-effect. Given the diversity of languages used to describe the behavior of system elements, we are left with a lowest-common-denominator semantics for the elements of the notation: events occur in which processors receive resources, transform them according to some method, and emit products. The reliable delivery of resources and products requires some sort of connection mechanism, which may be as simple as the dinner table, or as complex as the telecommunications system.

This is the core realization manifested in Karl Balke’s Diagrammatic Programming notation. Generalizing “resources” and “products” with “values”, the notation specifies cause-and-effect as a network of events. In each event, a processor performs a service to transform values, which are preserved and/or transferred to be available for execution of other services by the same or another processor. The services are represented as boxes that accept a specification for the action performed by the processor in terms suitable for prediction of its interaction with the values. This may be chemical reaction formulae, spoken dialog in a play, or statements in a computer programming language. The exchange of values is characterized by connections that must accommodate all possible values associated with an event. The connections are described by the values they must accommodate, and represented in the cause-and-effect network by labelled lines that link the services.
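A minimal sketch of how those elements might be modeled in code (the class names and fields are my own paraphrase of the concepts, not Balke’s notation, which is graphical):

using System.Collections.Generic;

// Illustrative data model only; the notation itself is a drawing, not code.
class Value
{
    public string Name;                 // e.g. "purchase order", "HTTP request"
}

class Connection
{
    public List<Value> Accommodates = new List<Value>();   // every value it must carry
    public Service From;
    public Service To;
}

class Service
{
    public string Processor;            // who or what performs the action
    public string Action;               // chemical formula, dialog, code, ...
    public List<Connection> Inputs = new List<Connection>();
    public List<Connection> Outputs = new List<Connection>();
}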

While Diagrammatic Programming notation does not require sequential execution, specification of a pattern of cause-and-effect leads inevitably to event sequencing. This does require the elimination of certain constructs from the action description. For example, DP notation contains elements that specify actions such as “wait here for a value to appear” and “analyze a value to determine what service to perform next.” When the program is converted to an executable form, processor-specific instructions are generated from the network layout.

In a properly disciplined design process, the end result is a specification of an operational hypothesis that allows the stakeholders in the implementation to negotiate their expectations. They may not be able to understand what is happening on the other side of a connection, but they can define their expectations regarding the values received by their processors. It is through that negotiation that the space of solutions is narrowed to a form that can be subjected to engineering design.

As has become obvious in this discussion, in the context of DP analysis simple human concerns become abstracted. The technology of Diagrammatic Programming must be concerned not only with the variant perspectives of participants in the design process, but also with the perceptual capabilities of different processors, where the value “Click Here” is encoded as Unicode bytes in computer memory but appears to the user as letters on a computer display. This richness manifests in terminology and notation that requires careful study and disciplined application to ensure that a program can be elaborated into executable form.

Full implementation of the Diagrammatic Programming method was my father’s life-work, a life-work conducted by those concerned that systems serve the people that depend upon them, rather than being used for the propagation of exploitative egos. This introduction is offered in the hope that of those committed to the production of value, some may be motivated to understand and carry that work on to its completion. It is simply far too much for me to accomplish alone.

In the most detailed comparison study of its use, the following benefits were revealed: rather than spending half of my development schedule in debugging, I spent one tenth. When faced with refactoring of a module to accommodate changed requirements, the effort was simply to select the services and connections to be encapsulated, and cut-and-paste them to a new drawing. While the representation of cause-and-effect may seem a burdensome abstraction, in fact it supports methods of design and analysis that are extremely difficult to emulate on instructions specified as text.

Design by Discipline

When I received my Ph.D. in Particle Physics in 1987, I was qualified as among the wonkiest elite in science. If I had been concerned with proving that I was smart, I might have stayed in physics, but the expectations for practical applications of fundamental physics had eroded greatly after my freshman year. I wanted the larger world to benefit from the work that I did, so I took a job at a national laboratory. After a brief post-doc in fundamental physics, I moved over to environmental science. Throughout, the growing importance of computerized control and simulation meant that I enjoyed a distinct competitive advantage over my peers, as I had learned to program from one of the foremost practitioners in his generation – my father. When I became a full-time software developer, my background in physics allowed me to collaborate with engineers, to the extent that I would be brought in on engineering meetings when my peers were unavailable.

Now this may seem like just bragging, but the point is that my career has been a dynamically evolving mash-up of science, engineering and programming. My experience was filtered through a practice of systems analysis that led me to examine and control the way that those disciplines interact. So when I think about science, I don’t think about it as “what scientists do.” I do consider myself a scientist, but I do engineering and programming as well, and I perceive the three disciplines as very different activities.

I took a course on philosophy of science as an undergraduate, and I won’t drag you, dear reader, through all the definitions that have been offered. Most of them hold that Francis Bacon’s articulation of the scientific process was a magic portal for the human intellect, as though practical efficacy and the rational ordering of knowledge had not been recognized virtues among the ancients. This leads many philosophers of science to be overly concerned with truth, when what is really of interest to us as people is what has yet to be true.

The power of science is in allowing us to pierce the shadowy veil of possibility. In biology, understanding of the variety of living things and their mutual dependencies gives us the power to sustain agriculture, breed robust animals, and improve our health. Chemistry empowers us to predict the stability and properties of new substances. And physics probes the fundamental mechanisms that determine both the stability of the world around us and our ability to manipulate it.

So science provides us with pure knowledge, unconstrained by our desires or intentions. It therefore tends to attract people that are driven by curiosity. That may sound like a trivial thing, but to find order in the chaotic milieu of nature is a source of great hope. Calendars that predict the seasons allowed agricultural societies to improve their harvests and so avoid famine. The germ theory of disease motivated doctors to wash their hands, transforming hospitals from centers of disease incubation to places of healing. Scientific curiosity – to ask neurotically “why?” – is the source of great power over the world.

That power is visible in the manufactured reality all around us: the houses, roads, dams and microchips. None of these things would have existed in the natural world. The artifacts exist only because people have a purpose for them. That purpose may be as simple as cooking dinner for our children, or as grand as ensuring that the world’s knowledge is available through the internet to any person, anywhere, any time. Which of our goals are realized is largely a matter of economics: are enough people invested in the outcome that they are willing to pay to see it accomplished? We don’t have to have a kitchen in every home, but few of us can afford to go out to dinner every night, so we pay for a kitchen. The cost and delay of moving information via mail drove the growth of the internet, at an expense that I would imagine (I can’t find numbers online) has run into trillions of dollars.

Now when people invest a substantial sum of money, they want some assurance that they’ll get what they’re paying for. Appreciating that gold does not tarnish, the sultan seeking to protect the beauty of his marble dome does not want to be told, “All natural gold originates in supernovae.” Or, worse, “If we smash heavy elements together in an accelerator, we can produce ten gold atoms a day.” Those kinds of answers are acceptable in scientific circles, but they are not acceptable in the engineering world. In the engineering world, when somebody comes to you with money and a problem, your job is to design an implementation that will realize their goal.

Since we’re a species of Jones-chasers, most of the time the engineer’s job is fairly simple. People come wanting something that they’ve seen, and the challenge is to understand how it was done before and to adapt the design to local conditions. But every now and then somebody comes in to ask for something completely novel. They want to build an elevator to space, for example, or create a light source that doesn’t produce soot. The engineer has no way of knowing whether such things are possible, except by reference to science.

It is into the gap between the formless knowledge of science and the concrete specifications of engineering that programming falls. Consider the light bulb: scientists know that heated objects glow, but also burn. Applying an electric voltage to a poor conductor causes it to heat as current flows through it. The filament will burn when exposed to oxygen, so we need to isolate it from air. Using an opaque material as the barrier would also trap the generated light. However, some solids (such as glass) are transparent, and air permeates only slowly through them.

The illustration is a cause-and-effect analysis. It looks at the desirable and undesirable outcomes of various scientific effects, attempting to eliminate the latter while preserving the former. The cause-and-effect analysis leads to an operational hypothesis: if we embed a wire in a glass bulb and apply a voltage, the wire will heat and emit light. This is not an engineering specification, because we don’t know how much the light bulb will cost, or how much light it will emit. But it also isn’t science, because the operational hypothesis is not known to be true. There may be no filament material that will glow brightly enough, or the required voltage may be so high that the source sparks down, or the glass may melt. But without the operational hypothesis, which I have called a “program,” engineering cannot begin.

We examined the challenge of software engineering in the first post in this series, focusing on the rapid development in the field and the difficulty in translating customer needs into terms that can be interpreted by digital processors. Today, we have arrived at a more subtle point: the algorithms written in our programming languages process information to produce information. The inputs for this process arise from nature and humans and increasingly other machines. Those inputs change constantly. Therefore very few programs (except maybe those for space probes) are deployed into predictable environments. That includes the hardware that runs the program – it may be Atom or Intel or AMD, and so the performance of software is not known a priori. For all of these reasons, every piece of software is simply an operational hypothesis. It is a program, not a product.

The Modern Tower of Babel

I alluded to the problem of language in my introductory post on programming. The allusion was hopeful, in that our machines are learning to understand us. Or rather, they are learning to understand those of us that speak supported languages.

The dominant language of international discourse today is English. That can be attributed to the success of the British Empire in the colonial age, and then to the industrial and diplomatic dominance of America in the aftermath of World War II. But the proliferation of English has affected the language itself.

The most significant changes impacted many of the colonial languages: they were simplified and regularized to make them easier to teach. Study of tribal languages reveals that they defy analysis. Few patterns are discerned in verb conjugations, and sentence structure obeys arbitrary rules. But the languages of major civilizations can also be daunting: the ideograms and subtle intonations of Chinese are a case in point. For both types of language, it is impossible for an adult to become fully proficient. But the education of adult leaders and manual laborers was critical to the stability of Empire. In the absorption of foreign populations, the complexity of the original language was eroded by the logistics of minority control.

And yet today the Brits like to say that England and America are divided by a common language. While the grammar and basic terms of the language are shared, cultural development and ambition still drive change. The physical sciences illustrate the point. While my professors focused on physics as applied mathematics, it was clear to me that it was also a foreign language, with arcane terms such as “Newtonian”, “Lagrangian” and “Hamiltonian” used to distinguish alternative formulations of the mathematics used to describe the motion of classical particles. As cultural developments, the latter two came to prominence because their mathematical formulations were generalized more readily to non-classical systems. And as regards ambition, we need only note that all three formulations bear the name of their originators.

But language can also be used consciously as a political tool. Newt Gingrich created the modern Republican media machine around 1990 by distributing cassette tapes each month with terms to be applied in derogating Democratic and lauding Republican policies. Many oppressed minorities encode their conversations to prevent authorities from interfering with the conduct of their lives, and those can emerge as full-blown languages in their own right (The “Ebonics” movement reflected such a development in America).

But in other cases, new usage arises as a form of entertainment. I had to ask my son to clarify the meaning of “sick” as used by today’s youth, and was surprised to discover that, as in Chinese, nuances of intonation were essential to understanding.

Most of these variations can be expected to be ephemeral. “Cool” was “sick” when I was growing up, and all attempts to obscure meaning will eventually founder on the rock of economic realities. People who can’t accurately describe the world around them seem bizarre if not outright insane, and ultimately excuse themselves from collaboration with others. While the linguists are fascinated by variation, they predict that the number of living languages will continue to decline.

As a programmer, however, I have the opposite experience. Fred Brooks and Martin Fowler have decried the “language of the month” phenomenon in software engineering. I myself feel a certain insecurity in my job search because the applications that I develop can only be created using fifteen-year-old technologies that most programmers would consider to be “archaic.”

To understand the root of this proliferation, it is amusing to backtrack to 1900 or so. Mathematicians had developed categories for numbers: the integers (used for inventory tracking), rational numbers (ratios of integers) and irrational real numbers whose expansions seemed to have no repeating pattern. Two very important branches of mathematics had been proven to depend upon real numbers: geometry and calculus. In geometry, the real number pi is the ratio of the distance around a circle to the distance across it. In calculus, Euler’s number e is the number whose exponential function has, at every point on the curve, a slope equal to its value.

However, philosophers pointed out that while calculating the exact value of these numbers is impossible, it doesn’t matter: any calculation performed using them can only be carried out to finite precision – and that is good enough. If we can’t cut a board to better than one thousandth of an inch, it doesn’t matter if we calculate the desired length to a billionth of an inch. Practically, the architect only needs to know pi well enough to be certain that the error in his calculation is reasonably smaller than one thousandth of an inch.
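To make that concrete (my numbers, purely for illustration): for a 48-inch-diameter circular table top, using pi ≈ 3.1416 instead of the true value misstates the circumference by about 48 × 0.0000073 ≈ 0.00035 inches, already several times finer than the one-thousandth of an inch the saw can hold.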

Given that binary notation could be used to represent numbers as well as common numerals, it was clear that computers could be used for practical calculations. When Alan Turing defined a simple but comprehensive model for digital computation, the field progressed confidently to construct machines for general purpose applications, encompassing not only mathematics but also language processing.

Now in Turing’s model, the digital processor operates on two kinds of input: variable data and instructions. The variable data is usually read from an input at execution. The instructions could be built into the actual structure of the processor, or read in and interpreted at run-time. The machine that Turing built to crack the Nazi Enigma code was of the first type, but his general model was of the second.

Turing’s original specification had fairly simple instructions (“move tape left”, “move tape right”, “read value” and “write value”), but it wasn’t long before Turing and others considered more complex instruction sets. While Oppenheimer, after the Trinity test, famously quoted the Bhagavad Gita, comparing himself to Shiva, “the destroyer of worlds”, I can’t help but wonder whether the original computer designers saw the parallels with Genesis. Here they were, building machines that they could “teach” to do work for them. They started with sand and metal and “breathed life” into it. The synaptic relays of the brain that implement human thought have operational similarities to transistor gates. Designs that allowed the processor’s output tape to be read back as its instruction tape also suggested that processors could modify their behavior, and thus “learn.”

The Turing test for intelligence reflects clearly the ambition to create a new form of intelligent life. But creating the instruction tape as a series of operations on zeros and ones was hopelessly inefficient. So began the flourishing of computer languages. At first, these were simply mechanisms for invoking the operation of blocks of circuitry that might “add” two numbers, or “move” a collection of bits from one storage location to another. Unfortunately, while these operations provided great leverage to programmers, they addressed directly only a small part of the language of mathematics, and were hopelessly disconnected from the language used to describe everything else from banking to baking.

Still fired with ambition, the machine designers turned to the problem of translating human language to machine instructions. Here the most progress was made in the hard sciences and engineering, where languages such as FORTRAN attempted to simulate the notation of mathematical texts. The necessary imprecision of business terminology was refined as COBOL, allowing some processes to be automated. And as machine architectures grew more complex, with multi-stage memory models, communication with external peripherals including printers and disk drives, and multi-processing (where users can start independent applications that are scheduled to share the processor), C and its variants were developed to ease the migration of operating-system code through architecture generations.

These examples illustrate the two streams of language development. The first was the goal of recognizing patterns in program structure and operation and facilitating the creation of new programs by abstracting those patterns as notation that could be “expanded” or “elaborated” by compilers (a special kind of software) into instructions to be executed by the machine. So for example, in C we type

c = a + b;

To anyone who has studied algebra, this seems straightforward, but to elaborate this text, the compiler relies upon the ‘;’ to find complete statements. It requires a declaration elsewhere in the code of the “types” of c, a and b, and expects that the values of a and b have been defined by earlier statements. Modern compilers will report an error if any of these conditions are not met. A competent programmer has skill in satisfying these conditions to speed the development of a program.
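The same conditions hold in C#, where I spend most of my working hours; a minimal complete version (names and values mine) looks like this:

class Sum
{
    static void Main()
    {
        int a = 2, b = 3;   // the declarations the compiler insists upon
        int c;

        c = a + b;          // now the statement elaborates cleanly

        System.Console.WriteLine(c);   // prints 5
    }
}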

The other stream is driven by the need to translate human language, which is inevitably imprecise, into instructions that can be executed meaningfully upon zeros and ones. Why is human language imprecise? Because more often than not we use our language to specify outcomes rather than procedures. The human environment is enormously complex and variable, and it is rare that we can simply repeat an activity mechanically and still achieve a desirable output. In part this is due to human psychology: even when the repetitions are identical, we are sensitized to the stimulus they provide. We desire variability. But more often, it is because the initial conditions change. We run out of salt, the summer rains come early, or the ore shipped to the mill contains impurities. Human programming is imprecise in part because we expect people to adapt their behavior to such variations.

Both abstraction and translation have stimulated the development of programming languages. Often, they go hand-in-hand. Systems developers expert in the use of C turn their skills to business systems development, and find that they can’t communicate with their customers. C++ arose, in part, as a method for attaching customer terminology to programming artifacts, facilitating negotiation of requirements. When the relational model was devised to organize business transaction data, SQL was developed to support analysis of that data. And when HTTP and HTML became established as the means to acquire and publish SQL query results in a pretty format on the world-wide web, languages such as Ruby arose to facilitate the implementation of such transactions, which involve a large number of repetitious steps.

What is amusing about this situation is that, unlike human languages, computer languages seem to be almost impossible to kill. Consider the case of COBOL. This language approximates English sentence structure, and was widely used for business systems development in the sixties and seventies. At the time, the language designers assumed that COBOL would be replaced by better alternatives, and so adopted a date format that ran only to the end of the century. Unfortunately, the applications written in COBOL became services for other applications written in other languages. The business rationale for the logic was lost as the original customers and developers retired, and so it was effectively impossible to recreate the functionality of the COBOL applications. As the century came to a close, the popular press advertised the “Year 2000” crisis as a possible cause of world-wide financial collapse. Fortunately, developers successfully isolated the code that depended upon the original date format, and made adaptations that allowed continued operation.

This trend will be magnified by the economics of software solutions delivery. Unlike other industries, almost the entire cost of a software product is in the development process. Manufacturing and distribution are almost free, and increasingly instantaneous. This means that the original developer has almost no control over the context of use, and so cannot anticipate what kinds of infrastructure will grow up around the application’s abstract capabilities.

The popular ambitions for software reflect this reality. The ability to distribute expert decision making as applications operating on increasingly precise representations of reality, all in the context of data storage that allows the results to be interpreted in light of local conditions: well, this implies that we can use software to solve any problem, anywhere. Some people talk about building networks of digital sensors that monitor everything from the weather to our respiration, and automatically deploy resources to ensure the well-being of everyone everywhere on earth.

In the original story of Babel, the people of the Earth gathered together to build a tower that would reach to heaven. Their intention was to challenge God. The mythical effort was undermined when God caused people to speak different languages, thus frustrating their ability to coordinate their efforts. In the modern era, we in effect seek to approximate the Biblical God using digital technology, but our ambitions lead us to create ever more abstract languages that we cannot rationalize, and so we find our efforts frustrated by the need to backtrack to repair our invalid assumptions.

In the terms of the programming discipline we will propose, however, the fundamental problem can be put this way: the digital realm is an abstract representation of reality. What basis do we have for believing that the applications created using those abstractions accurately translate the needs described by human users? If we can’t solve the problem of describing and analyzing that correspondence, then our software must inevitably become a form of black magic that rules over us.

Get With the Program

My first experience with programming occurred at a Cub Scout meeting. To build our visions of the future, our den leader brought in parents to talk about the work that they did. My father came in with a set of 3×5 cards. We sat in a circle and he handed each of us a card. Following the instructions on the card, we were able to perform an operation of binary logic on two numbers. Believe me, we didn’t have a clue what we were doing, but the insights we gained into the nature of computing sure made us feel smart.

Perhaps forty-five years later, I now go by the airy title of “Software Engineer.” This is actually a fraud. By law, “engineer” is a designation reserved for technologists who have passed a rigorous assessment of capability. No such test exists for software. As one consequence, the lack of reliable practices for predicting cost and schedule for software development is a scandal. At one point, surveys of software projects reported that almost half failed.

Given the importance of software, a number of serious efforts have been made to attack the problem, spanning the full spectrum from machine design all the way up to the executive suites. Methods and tools, team dynamics, and project management were all brought under scrutiny and overhauled. Nothing has worked.

Looking back at that Cub Scout meeting, I must admit to some bemusement, because in fact what goes on in computers is exactly what we did with our 3×5 cards, except on a much larger scale and at an incomprehensibly faster pace. It’s just a bunch of 0’s and 1’s flying around, with some conversion going on to allow people to digest the results of the calculation. Why should it be so hard to get a handle on the process?

Among the reasons are those that we might hope would be resolved in the foreseeable future. First is the enormously rapid pace of change in the industry. Moore’s Law meant that the difficulty of the problems that we could address grew by a factor of two every eighteen months over a span of thirty years – that is twenty doublings, and 2^20 compounds to roughly a million times! In data storage and access, the factors are even larger. No other technology discipline comes even close to boasting these kinds of numbers. The rapid pace of change means that what was important and interesting five years ago is meaningless today. So how can we certify practices at the forefront of technology?

Of course, as has been reported in the technology press, Moore’s Law has progressed to the extent that fundamental laws of physics will impose limits on computing power in the foreseeable future. That may allow the test designers some time to catch up with progress.

A second cause for hope is that pattern processing algorithms are becoming sophisticated enough to effectively interpret human behavior. Some of those algorithms are used for gesture and speech recognition. Some are used to recognize our habits and track our schedule, allowing digital assistants to interpret our gestures and speech as specific instructions appropriate to the context. The near-term impact of this capability will be that more and more of us will become programmers, because the barrier to entry – learning how to “speak” languages understood by computers – will be removed.

But there are certain aspects of computer programming that will remain intractable, the foremost of them being trying to visualize clearly the outcome of a program’s execution.

My father Karl was given a consulting assignment in the ‘60s at a large defense contractor. The president had committed to automation of production operations, and the project was nearing completion. But while the developers were confident that the software would perform as specified, nobody had performed a comprehensive assessment of the impact on current operations. That was the task presented to Karl.

Of particular concern was the production scheduling office. The process he found there transformed his understanding of programming. The walls of the large room were covered with clipboards hanging on hooks. Colored sheets were used to identify time-sensitive assemblies. Written manuals defined the procedures for movement of the clipboards on the walls. In many respects, it resembled the process he guided the Cub Scouts through a few years later. It was a program, and there was no way that the production automation system would be able to replicate its level of sophistication. When presented with the assessment, the president chose to force the scheduling team to use the new software, and production collapsed.

Fundamentally, software is just manipulations of zeros and ones. It is useful only to the extent that we can use the zeros and ones to represent the real world. For most of the history of computing, that meant that the principal role of the software developer was to translate the language of the human expert into terms that could be executed by a computer. When that knowledge was packaged and distributed, it meant that every local expert was replaced by a world’s expert, improving the decisions made by others, who then commissioned software solutions that stimulated improvements elsewhere, all at an ever-increasing pace that frustrated the attempts by managers to actually control the end result.

What Karl realized in the scheduling office was that programs exist all around us. Some of them we call “habits”, others we call “software”, others we call “symphonic scores.” To realize the intentions of the people that commission the creation of the program, the programmer must describe the actions to be taken in terms understood by the executor of the program – whether a mechanic, a computer or a musician. What is common to each of those situations, however, is the fact that the program only specifies a pattern of execution. A spark plug can be changed in a driveway or in a repair shop. Microsoft Word can be used to write a love letter or a subpoena. A symphony can be performed on period or modern instruments. The result in each case may be subtly or grossly different, but the pattern of the execution is the same.

And there was no good way of representing such patterns. It wasn’t science, and it wasn’t engineering. “Programming” was as good a word as any to use.