
The Modern Tower of Babel

I alluded to the problem of language in my introductory post on programming. The allusion was hopeful, in that our machines are learning to understand us. Or rather, they are learning to understand those of us who speak supported languages.

The dominant language of international discourse today is English. That can be attributed to the success of the British Empire in the colonial age, and then to the industrial and diplomatic dominance of America in the aftermath of World War II. But the proliferation of English has affected the language itself.

The most significant changes affected many of the colonial languages: they were simplified and regularized to make them easier to teach. Study of tribal languages reveals how far they defy analysis: verb conjugations show few discernible patterns, and sentence structure obeys rules that seem arbitrary. But the languages of major civilizations can also be daunting: the ideograms and subtle intonations of Chinese are a case in point. For both types of language, it is nearly impossible for an adult learner to become fully proficient. Yet the education of adult leaders and manual laborers was critical to the stability of empire. In the absorption of foreign populations, the complexity of the original language was eroded by the logistics of minority control.

And yet today the Brits like to say that England and America are divided by a common language. While the grammar and basic terms of the language are shared, cultural development and ambition still drive change. The physical sciences are a case in point. While my professors presented physics as applied mathematics, it was clear to me that it was also a foreign language, with arcane terms such as “Newtonian”, “Lagrangian” and “Hamiltonian” used to distinguish alternative formulations of the mathematics that describes the motion of classical particles. As cultural developments, the latter two came to prominence because their mathematical formulations were generalized more readily to non-classical systems. And as regards ambition, we need only note that all three formulations bear the names of their originators.

But language can also be used consciously as a political tool. Newt Gingrich created the modern Republican media machine around 1990 by distributing monthly cassette tapes of terms to be used to denigrate Democratic policies and laud Republican ones. Many oppressed minorities encode their conversations to prevent authorities from interfering with the conduct of their lives, and those codes can emerge as full-blown languages in their own right (the “Ebonics” movement reflected such a development in America).

But in other cases, new usage arises as a form of entertainment. I had to ask my son to clarify the meaning of “sick” as used by today’s youth, and was surprised to discover that, as in Chinese, nuances of intonation were essential to understanding.

Most of these variations can be expected to be ephemeral. “Cool” was “sick” when I was growing up, and all attempts to obscure meaning will eventually founder on the rock of economic realities. People who can’t accurately describe the world around them seem bizarre, if not outright insane, and ultimately excuse themselves from collaboration with others. While linguists are fascinated by variation, they predict that the number of living languages will continue to decline.

As a programmer, however, I have the opposite experience. Fred Brooks and Martin Fowler have decried the “language of the month” phenomenon in software engineering. I myself feel a certain insecurity in my job search because the applications that I develop can only be created using fifteen-year-old technologies that most programmers would consider to be “archaic.”

To understand the root of this proliferation, it is amusing to backtrack to 1900 or so. Mathematicians had developed categories for numbers: the integers (used for inventory tracking), rational numbers (ratios of integers) and real numbers, whose decimal expansions need never repeat. Two very important branches of mathematics had been shown to depend upon real numbers: geometry and calculus. In geometry, the real number pi is the ratio of the distance around a circle to the distance across it. In calculus, Euler’s number e is the base whose exponential curve has a slope equal to its value at every point.
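
That defining property of e can be checked numerically. Below is a minimal sketch in C; the test point x = 1.5 and the finite-difference step are arbitrary choices of mine, not anything from the mathematics texts:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* At an arbitrary point x = 1.5, compare the value of e^x with the
     * slope of e^x estimated by a central difference. They agree to
     * roughly six decimal places, which is all the estimate can deliver. */
    double x = 1.5, h = 1e-6;
    double value = exp(x);
    double slope = (exp(x + h) - exp(x - h)) / (2.0 * h);

    printf("value of e^x at x = 1.5: %.6f\n", value);
    printf("slope of e^x at x = 1.5: %.6f\n", slope);
    return 0;
}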

However, philosophers pointed out that while we can never calculate the exact values of these numbers, it doesn’t matter: any calculation performed using them need only be carried out to finite precision – and that is good enough. If we can’t cut a board to better than one thousandth of an inch, it doesn’t matter whether we calculate the desired length to a billionth of an inch. Practically, the architect only needs to know pi well enough to be certain that the error in his calculation is comfortably smaller than one thousandth of an inch.
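
Here is a minimal sketch in C of the architect’s situation, using a hypothetical 48-inch-diameter table top and a four-decimal value of pi (both numbers are mine, chosen only for illustration):

#include <stdio.h>
#include <math.h>

int main(void) {
    double diameter  = 48.0;        /* inches; a made-up job */
    double tolerance = 0.001;       /* the saw can't do better than this */

    double pi_rough   = 3.1416;     /* pi to four decimal places */
    double pi_precise = acos(-1.0); /* pi to full double precision */

    /* Error in the circumference introduced by the rough value of pi. */
    double error = fabs(pi_precise - pi_rough) * diameter;
    printf("error: %.6f inches\n", error);   /* about 0.000353 inches */
    printf("good enough: %s\n", error < tolerance ? "yes" : "no");
    return 0;
}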

Given that binary notation could represent numbers just as well as familiar decimal numerals, it was clear that computers could be used for practical calculations. When Alan Turing defined a simple but comprehensive model for digital computation, the field progressed confidently to construct machines for general-purpose applications, encompassing not only mathematics but also language processing.

Now in Turing’s model, the digital processor operates on two kinds of input: variable data and instructions. The variable data is usually read from an input device at execution time. The instructions can either be built into the actual structure of the processor, or read in and interpreted at run-time. The machine that Turing helped design to crack the Nazi Enigma code was of the first type, but his general model was of the second.

Turing’s original specification had fairly simple instructions (“move tape left”, “move tape right”, “read value” and “write value”), but it wasn’t long before Turing and others considered more complex instruction sets. After the Trinity test, Oppenheimer famously recalled a line from the Bhagavad Gita, “Now I am become Death, the destroyer of worlds”; I can’t help but wonder whether the original computer designers saw the parallels with Genesis. Here they were, building machines that they could “teach” to do work for them. They started with sand and metal and “breathed life” into it. The synaptic relays of the brain that implement human thought have operational similarities to transistor gates. Designs that allowed the processor’s output tape to be read back as its instruction tape also suggested that processors could modify their behavior, and thus “learn.”
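
That original instruction set is small enough to sketch in a few lines of C. The three-rule “program” below, which simply inverts the bits on its data tape, is my own toy example rather than anything Turing wrote, and the blank symbol is arbitrarily encoded as 2:

#include <stdio.h>

/* A toy tape machine in the spirit of Turing's model (a sketch, not his
 * exact formalism): the "instructions" are a table of transitions, and the
 * "variable data" is whatever is written on the tape at start-up. */

#define TAPE_LEN 32
#define HALT     -1

struct rule {            /* one row of the instruction table           */
    int state, read;     /* if we are in `state` and see `read` ...    */
    int write, move;     /* ... write `write`, move the head -1/+1 ... */
    int next;            /* ... and switch to `next` (HALT to stop)    */
};

/* Instruction tape: invert every bit, halt at the first blank (2). */
static const struct rule program[] = {
    {0, 0, 1, +1, 0},
    {0, 1, 0, +1, 0},
    {0, 2, 2,  0, HALT},
};

int main(void) {
    int tape[TAPE_LEN];
    int head = 0, state = 0;

    /* Data tape: 1 0 1 1 followed by blanks. */
    for (int i = 0; i < TAPE_LEN; i++) tape[i] = 2;
    tape[0] = 1; tape[1] = 0; tape[2] = 1; tape[3] = 1;

    while (state != HALT) {
        for (size_t r = 0; r < sizeof program / sizeof *program; r++) {
            if (program[r].state == state && program[r].read == tape[head]) {
                tape[head] = program[r].write;
                head      += program[r].move;
                state      = program[r].next;
                break;
            }
        }
    }

    for (int i = 0; i < 8; i++) printf("%d ", tape[i]);
    printf("\n");   /* prints: 0 1 0 0 2 2 2 2 */
    return 0;
}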

The Turing test for intelligence clearly reflects the ambition to create a new form of intelligent life. But writing the instruction tape as a series of operations on zeros and ones was hopelessly inefficient. So began the flourishing of computer languages. At first, these were simply mechanisms for invoking the operation of blocks of circuitry that might “add” two numbers, or “move” a collection of bits from one storage location to another. Unfortunately, while these operations provided great leverage to programmers, they directly addressed only a small part of the language of mathematics, and were hopelessly disconnected from the language used to describe everything else, from banking to baking.

Still fired with ambition, the machine designers turned to the problem of translating human language into machine instructions. Here the most progress was made in the hard sciences and engineering, where languages such as FORTRAN attempted to mimic the notation of mathematical texts. COBOL refined the necessarily imprecise terminology of business far enough that some processes could be automated. And as machine architectures grew more complex, with multi-stage memory models, communication with external peripherals such as printers and disk drives, and multi-processing (where users can start independent applications that are scheduled to share the processor), C and its variants were developed to ease the migration of operating systems code across architecture generations.

These examples illustrate the two streams of language development. The first was driven by the goal of recognizing patterns in program structure and operation, and of facilitating the creation of new programs by abstracting those patterns into notation that could be “expanded” or “elaborated” by compilers (a special kind of software) into instructions to be executed by the machine. So for example, in C we type

c = a + b;

To anyone who has studied algebra, this seems straightforward, but to elaborate this text, the compiler relies upon the ‘;’ to find complete statements. It requires a declaration elsewhere in the code of the “types” of c, a and b, and expects that the values of a and b have been defined by earlier statements. Modern compilers will report an error if any of these conditions is not met. A competent programmer is skilled at satisfying these conditions, which speeds the development of a program.
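
Filling in those conditions gives a complete program that a compiler will accept. A minimal sketch, where the choice of int for the types and of 2 and 3 for the initial values is mine:

#include <stdio.h>

int main(void) {
    int a = 2;      /* the compiler must know the types of a, b and c ...   */
    int b = 3;      /* ... and a and b must have values before they're used */
    int c;

    c = a + b;      /* the ';' marks the end of the statement */

    printf("%d\n", c);   /* prints 5 */
    return 0;
}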

The other stream is driven by the need to translate human language, which is inevitably imprecise, into instructions that can be executed meaningfully upon zeros and ones. Why is human language imprecise? Because more often than not we use our language to specify outcomes rather than procedures. The human environment is enormously complex and variable, and it is rare that we can simply repeat an activity mechanically and still achieve a desirable output. In part this is due to human psychology: even when the repetitions are identical, we are sensitized to the stimulus they provide. We desire variability. But more often, it is because the initial conditions change. We run out of salt, the summer rains come early, or the ore shipped to the mill contains impurities. Human programming is imprecise in part because we expect people to adapt their behavior to such variations.

Both abstraction and translation have stimulated the development of programming languages. Often, they go hand-in-hand. Systems developers expert in the use of C turn their skills to business systems development, and find that they can’t communicate with their customers. C++ arose, in part, as a method for attaching customer terminology to programming artifacts, facilitating negotiation of requirements. When the relational model was devised to organize business transaction data, SQL was developed to support analysis of that data. And when the internet standards of HTTP and HTML became established as the means to acquire and publish SQL query results in a pretty format on the world-wide web, languages such as Ruby arose to facilitate the implementation of such transactions, which involve a large number of repetitious steps.

What is amusing about this situation is that, unlike human languages, computer languages seem to be almost impossible to kill. Consider the case of COBOL. This language approximates English sentence structure, and was widely used for business systems development in the sixties and seventies. At the time, the language designers assumed that COBOL would be replaced by better alternatives, and so adopted a two-digit year format that ran only to the end of the century. Unfortunately, the applications written in COBOL became services for other applications written in other languages. The business rationale for the logic was lost as the original customers and developers retired, and so it was effectively impossible to recreate the functionality of the COBOL applications. As the century came to a close, the popular press advertised the “Year 2000” crisis as a possible cause of world-wide financial collapse. Fortunately, developers successfully isolated the code that depended upon the original date format, and made adaptations that allowed continued operation.
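
The trap was arithmetic on two-digit years. The sketch below is in C rather than COBOL, and the helper years_between and the sample values are hypothetical, but it shows the shape of the failure:

#include <stdio.h>

/* A sketch of the two-digit-year arithmetic behind the "Year 2000" scare:
 * the century is assumed, so the year 00 looks older than the year 99. */
int years_between(int yy_start, int yy_end) {
    return yy_end - yy_start;   /* fine until the century rolls over */
}

int main(void) {
    /* An account opened in 1999, checked in 2000, stored as two digits. */
    printf("account age: %d years\n", years_between(99, 0));  /* prints -99 */
    return 0;
}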

This trend will be magnified by the economics of software solutions delivery. Unlike other industries, almost the entire cost of a software product lies in the development process. Manufacturing and distribution are almost free, and increasingly instantaneous. This means that the original developer has almost no control over the context of use, and so cannot anticipate what kinds of infrastructure will grow up around the application’s abstract capabilities.

The popular ambitions for software reflect this reality. The ability to distribute expert decision making as applications operating on increasingly precise representations of reality, all in the context of data storage that allows the results to be interpreted in light of local conditions: well, this implies that we can use software to solve any problem, anywhere. Some people talk about building networks of digital sensors that monitor everything from the weather to our respiration, and automatically deploy resources to ensure the well-being of everyone everywhere on earth.

In the original story of Babel, the people of the Earth gathered together to build a tower that would reach to heaven. Their intention was to challenge God. The mythical effort was undermined when God caused people to speak different languages, thus frustrating their ability to coordinate their efforts. In the modern era, we in effect seek to approximate the Biblical God using digital technology, but our ambitions lead us to create ever more abstract languages that we cannot rationalize, and so we find our efforts frustrated by the need to backtrack to repair our invalid assumptions.

In terms of the programming discipline we will propose, however, the fundamental problem can be put this way: the digital realm is an abstract representation of reality. What basis do we have for believing that the applications created using those abstractions accurately translate the needs described by human users? If we can’t solve the problem of describing and analyzing that correspondence, then our software must inevitably become a form of black magic that rules over us.

2 thoughts on “The Modern Tower of Babel”

  1. Very insightful article on the house of cards that modern programming is built upon. Particularly unsettling, if reliance on AI increases at current rates. Unfortunately, the other extreme of the self learning / correcting AI is equally or even more worrisome as well. Wondering if there is any common framework being developed, implemented to re-document / deal with / monitor both these issues, or if such a framework is actually possible…

    • Ouch! You’ve raised another aspect of the problem. As I progress with this series, I’m going to elevate an approach that will help to establish the correspondence between user expectations and software behavior.

      With that, we can attempt to do things with AI systems such as we do with people: limit their access to energy when they do things that we don’t like. The challenges are that systems under AI control make decisions far faster than people can monitor, and networked software services form an “ecosystem.” Considering the latter, when we pull the plug on a service we can trigger an “extinction episode.” Google is known to be a bad actor in this regard, putting up services that they pull down when it becomes obvious that there’s no profit potential.

      One way to address the problem is to do what the Securities and Exchange Commission is doing: create guardian AI systems that monitor the behavior of other AI systems. This approach has been used to detect patterns of collusion among those who engage in electronic trading. If the “guardian” machines were given only the correspondence analysis mentioned above, they could enforce the contract between software and users, and not really do much else.
