The Second Coming of Donald

The common interpretation of Revelation 11:15 is that the reign of Christ begins when Gabriel sounds his horn. In The Soul Comes First, I offer an alternative interpretation of the verse as heralding the beginning of the age of Humanity, which will bring redemption to the Earth through the intelligent exercise of divine love.

But, you know, scripture is inscrutable, and I’m beginning to realize that maybe we’ve all misunderstood.

Gabriel is known as the angel that transmits God’s truth. FOX news broadcasts “God’s truth.” A trumpet is a kind of horn. In the first Republican debate on FOX news, we saw nine Trump-ettes on the stage with Donald.

Hallelujah! Praise the Lord! Jesus will be outed by the FOXing of Donald!

Of course, NBC will carry the coverage. Looks like FOX out-foxed itself.

Design by Discipline

When I received my Ph.D. in Particle Physics in 1987, I was qualified as among the wonkiest elite in science. If I had been concerned with proving that I was smart, I might have stayed in physics, but the expectations for practical applications of fundamental physics had eroded greatly after my freshman year. I wanted the larger world to benefit from the work that I did, so I took a job at a national laboratory. After a brief post-doc in fundamental physics, I moved over to environmental science. Throughout, the growing importance of computerized control and simulation meant that I enjoyed a distinct competitive advantage over my peers, as I had learned to program from one of the foremost practitioners of his generation – my father. When I became a full-time software developer, my background in physics allowed me to collaborate with engineers, to the extent that I would be brought in on engineering meetings when my peers were unavailable.

Now this may seem like just bragging, but the point is that my career has been a dynamically evolving mash-up of science, engineering and programming. My experience was filtered through a practice of systems analysis that led me to examine and control the way that those disciplines interact. So when I think about science, I don’t think about it as “what scientists do.” I do consider myself a scientist, but I do engineering and programming as well, and I perceive the three disciplines as very different activities.

I took a course on philosophy of science as an undergraduate, and I won’t drag you, dear reader, through all the definitions that have been offered. Most of them hold that Francis Bacon’s articulation of the scientific process was a magic portal for the human intellect, as though practical efficacy and the rational ordering of knowledge had not been recognized virtues among the ancients. This leads many philosophers of science to be overly concerned with truth, when what is really of interest to us as people is what has yet to be true.

The power of science is in allowing us to pierce the shadowy veil of possibility. In biology, understanding of the variety of living things and their mutual dependencies gives us the power to sustain agriculture, breed robust animals, and improve our health. Chemistry empowers us to predict the stability and properties of new substances. And physics probes the fundamental mechanisms that determine both the stability of the world around us and our ability to manipulate it.

So science provides us with pure knowledge, unconstrained by our desires or intentions. It therefore tends to attract people that are driven by curiosity. That may sound like a trivial thing, but to find order in the chaotic milieu of nature is a source of great hope. Calendars that predict the seasons allowed agricultural societies to improve their harvests and so avoid famine. The germ theory of disease motivated doctors to wash their hands, transforming hospitals from centers of disease incubation to places of healing. Scientific curiosity – to ask neurotically “why?” – is the source of great power over the world.

That power is visible in the manufactured reality all around us: the houses, roads, dams and microchips. None of these things would have existed in the natural world. The artifacts exist only because people have a purpose for them. That purpose may be as simple as cooking dinner for our children, or as grand as ensuring that the world’s knowledge is available through the internet to any person, anywhere, any time. Which of our goals are realized is largely a matter of economics: are enough people invested in the outcome that they are willing to pay to see it accomplished? We don’t have to have a kitchen in every home, but few of us can afford to go out to dinner every night, so we pay for a kitchen. The cost and delay of moving information via mail drove the growth of the internet, at an expense that I would imagine (I can’t find numbers online) has run into trillions of dollars.

Now when people invest a substantial sum of money, they want some assurance that they’ll get what they’re paying for. Appreciating that gold does not tarnish, the sultan seeking to protect the beauty of his marble dome does not want to be told, “All natural gold originates in supernovae.” Or, worse, “If we smash heavy elements together in an accelerator, we can produce ten gold atoms a day.” Those kinds of answers are acceptable in scientific circles, but they are not acceptable in the engineering world. In the engineering world, when somebody comes to you with money and a problem, your job is to design an implementation that will realize their goal.

Since we’re a species of Jones-chasers, most of the time the engineer’s job is fairly simple. People come wanting something that they’ve seen, and the challenge is to understand how it was done before and to adapt the design to local conditions. But every now and then somebody comes in to ask for something completely novel. They want to build an elevator to space, for example, or create a light source that doesn’t produce soot. The engineer has no way of knowing whether such things are possible, except by reference to science.

It is into the gap between the formless knowledge of science and the concrete specifications of engineering that programming falls. Consider the light bulb: scientists know that heated objects glow, but also burn. Applying an electric voltage to a poor conductor causes it to heat as current flows through it. The filament will burn when exposed to oxygen, so we need to isolate it from air. Using an opaque material as the barrier would also trap the generated light. However, some solids (such as glass) are transparent, and air permeates only slowly through them.

The illustration is a cause-and-effect analysis. It looks at the desirable and undesirable outcomes of various scientific effects, attempting to eliminate the latter while preserving the former. The cause-and-effect analysis leads to an operational hypothesis: if we embed a wire in a glass bulb and apply a voltage, the wire will heat and emit light. This is not an engineering specification, because we don’t know how much the light bulb will cost, or how much light it will emit. But it also isn’t science, because the operational hypothesis is not known to be true. There may be no filament material that will glow brightly enough, or the required voltage may be so high that the source sparks down, or the glass may melt. But without the operational hypothesis, which I have called a “program,” engineering cannot begin.

We examined the challenge of software engineering in the first post in this series, focusing on the rapid development in the field and the difficulty of translating customer needs into terms that can be interpreted by digital processors. Today, we have arrived at a more subtle point: the algorithms written in our programming languages process information to produce information. The inputs for this process arise from nature and humans and, increasingly, other machines. Those inputs change constantly. Therefore very few programs (except maybe those for space probes) are deployed into predictable environments. That includes the hardware that runs the program – it may be ARM or Intel or AMD – and so the performance of software is not known a priori. For all of these reasons, every piece of software is simply an operational hypothesis. It is a program, not a product.

The Modern Tower of Babel

I alluded to the problem of language in my introductory post on programming. The allusion was hopeful, in that our machines are learning to understand us. Or rather, they are learning to understand those of us that speak supported languages.

The dominant language of international discourse today is English. That can be attributed to the success of the British Empire in the colonial age, and then to the industrial and diplomatic dominance of America in the aftermath of World War II. But the proliferation of English has affected the language itself.

The most significant changes impacted many of the colonial languages: they were simplified and regularized to make them easier to teach. Study of tribal languages reveals that they defy analysis. Few patterns are discerned in verb conjugations, and sentence structure obeys arbitrary rules. But the languages of major civilizations can also be daunting: the ideograms and subtle intonations of Chinese are a case in point. For both types of language, it is impossible for an adult to become fully proficient. But the education of adult leaders and manual laborers was critical to the stability of Empire. In the absorption of foreign populations, the complexity of the original language was eroded by the logistics of minority control.

And yet today the Brits like to say that England and America are divided by a common language. While the grammar and basic terms of the language are shared, cultural development and ambition still drive change. The physical sciences are characteristic. While my professors focused on physics as applied mathematics, it was clear to me that it was also a foreign language, with arcane terms such as “Newton’s Third Law”, “Lagrangian” and “Hamiltonian” used to distinguish alternative formulations of the mathematics used to describe the motion of classical particles. As cultural developments, the latter two came to prominence because their mathematical formulations were generalized more readily to non-classical systems. And as regards ambition, we need only note that all three formulations bear the name of their originators.

But language can also be used consciously as a political tool. Newt Gingrich created the modern Republican media machine around 1990 by distributing cassette tapes each month with terms to be applied in derogating Democratic and lauding Republican policies. Many oppressed minorities encode their conversations to prevent authorities from interfering with the conduct of their lives, and those can emerge as full-blown languages in their own right (The “Ebonics” movement reflected such a development in America).

But in other cases, new usage arises as a form of entertainment. I had to ask my son to clarify the meaning of “sick” as used by today’s youth, and was surprised to discover that, as in Chinese, nuances of intonation were essential to understanding.

Most of these variations can be expected to be ephemeral. “Cool” was “sick” when I was growing up, and all attempts to obscure meaning will eventually founder on the rock of economic realities. People that can’t describe accurately the world around them seem bizarre if not outright insane, and ultimately excuse themselves from collaboration with others. While the linguists are fascinated by variation, they predict that the number of living languages will continue to decline.

As a programmer, however, I have the opposite experience. Fred Brooks and Martin Fowler have decried the “language of the month” phenomenon in software engineering. I myself feel a certain insecurity in my job search because the applications that I develop can only be created using fifteen-year-old technologies that most programmers would consider to be “archaic.”

To understand the root of this proliferation, it is amusing to backtrack to 1900 or so. Mathematicians had developed categories for numbers: the integers (used for inventory tracking), rational numbers (ratios of integers) and real numbers that seemed to have no repeating pattern. Two very important branches of mathematics had been proven to depend upon real numbers: geometry and calculus. In geometry, the real number pi is the ratio of the distance around a circle to the distance across it. In calculus, Euler’s number e is the base whose exponential function has a slope equal to its value at every point on the curve.
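That defining property of e can be checked numerically. In the sketch below, the sample point and the step size are arbitrary choices of mine, not anything canonical:

```python
import math

# Estimate the slope of a function at x by a central difference.
def slope(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

# At any point on the curve e**x, the slope equals the value there.
x = 1.7  # an arbitrary sample point
print(abs(slope(math.exp, x) - math.exp(x)) < 1e-4)  # True
```

The same check fails for any other base: the slope of, say, 2**x is proportional to, but not equal to, its value.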

However, philosophers pointed out that while the exact values of these numbers can never be calculated, any practical calculation using them need only be performed with finite precision – and that is good enough. If we can’t cut a board to better than one thousandth of an inch, it doesn’t matter whether we calculate the desired length to a billionth of an inch. Practically, the architect only needs to know pi well enough to be certain that the error in his calculation is reasonably smaller than one thousandth of an inch.
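To put numbers on the architect’s point, here is a minimal sketch; the tolerance and the 12-inch diameter are made-up figures for illustration, not from any real building code:

```python
import math

# Hypothetical board-cutting tolerance: one thousandth of an inch.
tolerance = 1e-3  # inches

# The architect's few-digit approximation of pi.
pi_approx = 3.1416

# Error in the circumference of a 12-inch-diameter circle.
diameter = 12.0
error = abs(math.pi - pi_approx) * diameter

# The approximation error is far smaller than the saw can resolve.
print(error < tolerance)  # True
```

Four decimal places of pi already leave the error two orders of magnitude below the tolerance of the cut.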

Given that binary notation could be used to represent numbers as well as common numerals, it was clear that computers could be used for practical calculations. When Alan Turing defined a simple but comprehensive model for digital computation, the field progressed confidently to construct machines for general purpose applications, encompassing not only mathematics but also language processing.
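The claim that binary notation carries the same information as common numerals is easy to demonstrate; in Python, for instance, the round trip between the two notations is lossless:

```python
# The same quantity written in binary and in decimal notation.
n = 42
bits = bin(n)        # binary notation as a string
print(bits)          # 0b101010
print(int(bits, 2))  # 42 -- converting back loses nothing
```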

Now in Turing’s model, the digital processor operates on two kinds of input: variable data and instructions. The variable data is usually read from an input at execution. The instructions could be built into the actual structure of the processor, or read in and interpreted at run-time. The machine that Turing built to crack the Nazi Enigma code was of the first type, but his general model was of the second.

Turing’s original specification had fairly simple instructions (“move tape left”, “move tape right”, “read value” and “write value”), but it wasn’t long before Turing and others considered more complex instruction sets. After the Trinity test, Oppenheimer famously recalled the line from the Bhagavad Gita, “Now I am become Death, the destroyer of worlds”; I can’t help but wonder whether the original computer designers saw the parallels with Genesis. Here they were, building machines that they could “teach” to do work for them. They started with sand and metal and “breathed life” into it. The synaptic relays of the brain that implement human thought have operational similarities to transistor gates. Designs that allowed the processor’s output tape to be read back as its instruction tape also suggested that processors could modify their behavior, and thus “learn.”
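The separation of variable data (the tape) from instructions (the rule table) can be sketched in a few lines. The toy simulator below conveys only the flavor of Turing’s model – the state names, the blank marker and the step limit are all conventions I invented for illustration:

```python
# A toy Turing machine: a tape, a head position, and a rule table
# keyed by (state, symbol). Each rule gives (write, move, next state).
def run(tape, rules, state="start", pos=0, max_steps=100):
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += {"L": -1, "R": 1}[move]
    return "".join(tape)

# An instruction tape that inverts every bit, halting at the blank "_".
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("1011_", invert))  # 0100_
```

Note that `invert` is itself just data: feed the machine a different table and the same mechanism computes something else, which is exactly the point of the general model.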

The Turing test for intelligence reflects clearly the ambition to create a new form of intelligent life. But creating the instruction tape as a series of operations on zeros and ones was hopelessly inefficient. So began the flourishing of computer languages. At first, these were simply mechanisms for invoking the operation of blocks of circuitry that might “add” two numbers, or “move” a collection of bits from one storage location to another. Unfortunately, while these operations provided great leverage to programmers, they addressed directly only a small part of the language of mathematics, and were hopelessly disconnected from the language used to describe everything else from banking to baking.

Still fired with ambition, the machine designers turned to the problem of translating human language to machine instructions. Here the most progress was made in the hard sciences and engineering, where languages such as FORTRAN attempted to simulate the notation of mathematical texts. The necessary imprecision of business terminology was refined as COBOL, allowing some processes to be automated. And as machine architectures grew more complex, with multi-stage memory models, communication with external peripherals including printers and disk drives, and multi-processing (where users can start independent applications that are scheduled to run sequentially), C and its variants were developed to ease the migration of operating systems code through architecture generations.

These examples illustrate the two streams of language development. The first was the goal of recognizing patterns in program structure and operation and facilitating the creation of new programs by abstracting those patterns as notation that could be “expanded” or “elaborated” by compilers (a special kind of software) into instructions to be executed by the machine. So for example, in C we type

c = a + b;

To anyone that has studied algebra, this seems straightforward, but to elaborate this text, the compiler relies upon the ‘;’ to find complete statements. It requires a declaration elsewhere in the code of the “types” of c, a and b, and expects that the values of a and b have been defined by earlier statements. Modern compilers will report an error if any of these conditions are not met. A competent programmer has skill in satisfying these conditions to speed the development of a program.
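Those conditions can be mimicked by a toy checker. Everything in this sketch – the function, the sets of declared and defined names – is invented for illustration; a real C compiler does far more:

```python
# A toy version of the checks a compiler performs on "c = a + b;":
# the ';' closes the statement, every name must be declared, and
# operands must have been defined by earlier statements.
def check(declared, defined, statement):
    target, expr = statement.rstrip(";").split("=")
    names = [target.strip()] + [t.strip() for t in expr.split("+")]
    for name in names:
        if name not in declared:
            return f"error: '{name}' not declared"
    for name in names[1:]:
        if name not in defined:
            return f"error: '{name}' used before definition"
    return "ok"

print(check({"a", "b", "c"}, {"a", "b"}, "c = a + b;"))  # ok
print(check({"a", "b"}, {"a", "b"}, "c = a + b;"))  # error: 'c' not declared
```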

The other stream is driven by the need to translate human language, which is inevitably imprecise, into instructions that can be executed meaningfully upon zeros and ones. Why is human language imprecise? Because more often than not we use our language to specify outcomes rather than procedures. The human environment is enormously complex and variable, and it is rare that we can simply repeat an activity mechanically and still achieve a desirable output. In part this is due to human psychology: even when the repetitions are identical, we are sensitized to the stimulus they provide. We desire variability. But more often, it is because the initial conditions change. We run out of salt, the summer rains come early, or the ore shipped to the mill contains impurities. Human programming is imprecise in part because we expect people to adapt their behavior to such variations.

Both abstraction and translation have stimulated the development of programming languages. Often, they go hand-in-hand. Systems developers expert in the use of C turn their skills to business systems development, and find that they can’t communicate with their customers. C++ arose, in part, as a method for attaching customer terminology to programming artifacts, facilitating negotiation of requirements. When the relational model was devised to organize business transaction data, SQL was developed to support analysis of that data. And when the internet protocols of HTTP and HTML became established as the means to acquire and publish SQL query results in a pretty format on the world-wide web, languages such as Ruby arose to facilitate the implementation of such transactions, which involve a large number of repetitious steps.

What is amusing about this situation is that, unlike human languages, computer languages seem to be almost impossible to kill. Consider the case of COBOL. This language approximates English sentence structure, and was widely used for business systems development in the sixties and seventies. At the time, the language designers assumed that COBOL would be replaced by better alternatives, and so adopted a date format that ran only to the end of the century. Unfortunately, the applications written in COBOL became services for other applications written in other languages. The business rationale for the logic was lost as the original customers and developers retired, and so it was effectively impossible to recreate the functionality of the COBOL applications. As the century came to a close, the popular press advertised the “Year 2000” crisis as a possible cause of world-wide financial collapse. Fortunately, developers successfully isolated the code that depended upon the original date format, and made adaptations that allowed continued operation.
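One such adaptation, commonly called “windowing,” is easy to sketch: each two-digit year is mapped into a hundred-year window around a pivot. The pivot of 50 below is illustrative only, since each system chose its own:

```python
# "Windowing" repair for two-digit years: values below the pivot are
# read as 20xx, the rest as 19xx. The pivot 50 is an arbitrary example.
PIVOT = 50

def expand_year(yy):
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(99))  # 1999
print(expand_year(7))   # 2007
```

The trick bought time without touching the stored data format, which is precisely why so much of that COBOL is still running.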

This trend will be magnified by the economics of software solutions delivery. Unlike other industries, almost the entire cost of a software product is in the development process. Manufacturing and distribution is almost free, and increasingly instantaneous. This means that the original developer has almost no control over the context of use, and so cannot anticipate what kinds of infrastructure will grow up around the application’s abstract capabilities.

The popular ambitions for software reflect this reality. The ability to distribute expert decision making as applications operating on increasingly precise representations of reality, all in the context of data storage that allows the results to be interpreted in light of local conditions: well, this implies that we can use software to solve any problem, anywhere. Some people talk about building networks of digital sensors that monitor everything from the weather to our respiration, and automatically deploy resources to ensure the well-being of everyone everywhere on earth.

In the original story of Babel, the people of the Earth gathered together to build a tower that would reach to heaven. Their intention was to challenge God. The mythical effort was undermined when God caused people to speak different languages, thus frustrating their ability to coordinate their efforts. In the modern era, we in effect seek to approximate the Biblical God using digital technology, but our ambitions lead us to create ever more abstract languages that we cannot rationalize, and so we find our efforts frustrated by the need to backtrack to repair our invalid assumptions.

In the terms of the programming discipline we will propose, however, the fundamental problem can be put this way: the digital realm is an abstract representation of reality. What basis do we have for believing that the applications created using those abstractions accurately translate the needs described by human users? If we can’t solve the problem of describing and analyzing that correspondence, then our software must inevitably become a form of black magic that rules over us.

Military Truth-in-Action

I’m just realizing that the military, confronted with the option of either going to war with Iran or supporting the implementation of the multi-national nuclear technology agreement with Iran, is strongly motivated to shift its loyalties from the Republicans to the Democrats in this election cycle.

What do the Republicans not understand about getting the nuclear issue off the table so that we can start grinding Iran down for its activities fomenting terrorism against our allies in the Middle East? Is that really so difficult to understand?

One step at a time. All that your blustering is going to do is upset the apple cart.

The Blood of the Innocent

I was winding my evening up, thinking about how to organize my next post on programming, when I got a notice from MSN of the truck bombing in Sadr City in Baghdad. It turned my thoughts back to yesterday’s topic.

In the aftermath of Hussein’s arrest, I had a dream about Muqtada Al Sadr, the “firebrand” cleric whose father had been assassinated in the south of Iraq for his outspoken opposition to the regime. Muqtada and his Shia militia had been playing a game of cat-and-mouse with the occupying forces, attempting to wear out US resolve. In the dream, he railed against the hypocrisy of American intervention, seeing it as merely a far more active example of the means we use throughout the world to secure our corrupt lifestyle.

I did not dispute his point, only offering “But Osama is right. If Muslims lived according to the Qur’an, what America did wouldn’t make a difference.” I waited while the point sank in, and then asked “So tell me, what is the source of your anger?”

And I was down on the street with him as a wailing mother carried to him the daughter that had died of starvation.

“Everyone mourns the death of a child.” I lay in my bed and wept, and when the tears stopped, showed him my own burdens. “It’s not possible to prevent suffering in the world. The role of the spiritual leader is rather to guide the beloved community away from anger and fear by turning their thoughts toward the miracle of healing.”

The situation in the Middle East demands enormous strength from those such as Ali Sistani and Al Sadr. I see the region going through the exercise that Europe pursued in the first half of the twentieth century. Europe in 1900 was a continent full of peoples that hated each other. It wasn’t limited to the Jews – the Jews simply didn’t have an army. World War I was inevitable due to the interlocking and contradictory alliances of convenience that triggered a general mobilization following the assassination of Archduke Ferdinand. The Treaty of Versailles and subsequent blockade of German ports were a bloody cross borne by the German people for the continent’s hypocritical great power politics.

World War I is my model for the Middle East. The conflict is not waged trench-by-trench under the barrage of artillery, but street-by-bloody street after the truck bombs explode. As in Europe, it is a cancerous explosion of violence perpetrated by men lacking the skills and imagination to succeed in productive collaboration with their neighbors. It is a cancer fed by the cowardice of leaders that surround themselves with their ethnic peers for fear of bringing the enemy too close.

The resolution in Europe, after fifty years, was brought only by the complete destruction of the industrial economies of the continent. The nations of Europe realized that there were no longer winners in wars. Today it is even worse: modern chemistry makes it too easy to create weapons, and the accumulated grief of the Middle East provides a steady stream of suicidal delivery men.

So what can America do? Until the leaders of the region agree to intervene to create peace, little except to try to brake the spread of the disease. Among the recognized governments, that may include creating dependency on advanced weapons systems that require frequent maintenance using expensive parts sourced from America. Another means is to organize economic sanctions against rogue states. Finally, we can wait for the violence to turn inwards, creating a new generation of martyrs whose avengers help us target the leaders of extremist movements.

There are no grand gestures here, no quick fixes. It’s a long grind against evil, by an American people and government that give the world plenty of reason not to trust us. But as was demonstrated in the Cold War, the Philippines and South Africa, it’s the only material means of foreign policy that will effect change.

And for those without access to those mechanisms: Pray. Open your hearts to their suffering. Will them to receive the best of your strength, faith and wisdom. It makes a difference, in ways that cannot be proven. In the face of all the reasons they have to fear, ultimately our compassion is the only way of bringing courage to the citizens that must find solutions in the Middle East.

Bushmongering

Trapped between a rock and a hard place by the legacy of his brother’s War in Iraq, Jeb Bush delivered a speech at the Reagan Library in Simi Valley (I wasn’t invited) that followed the pattern of all self-rationalizing bullies: blame the victim.

Hillary was First Lady during the transition to Jr’s Administration. The destroyer Cole had been holed by a floating IED, and the Clinton team had determined that Al Qaeda was certainly the culprit. The defense briefings implored the Bush team to send a strong message to the perpetrators, but Karl Rove’s political calculation was that the incident was something that could be painted as a Democratic legacy.

Instead, the Bush team set about antagonizing both allies and adversaries with strong-armed attempts to modify the interpretation of arms limitations treaties to allow deployment of a nuclear missile shield. The week before 9/11, Tom Daschle, leader of the Democratic majority in the Senate, called a press conference on the Capitol steps to voice his concerns that the Bush team did not understand the geopolitical threat posed by Islamic extremists. Later reporting indeed revealed that American withdrawals in Beirut and Somalia were capped by the failure to take action after Cole. Osama bin Ladin believed that America was morally weak, and that one further blow would cause us to curl up and hide from the world.

The Bush team’s incompetence and short-sightedness was compounded in the run-up to the Iraq War. The false claim of yellow-cake trading with Niger was the linchpin of the “weapons of mass destruction” case against Saddam Hussein. When Joe Wilson, former Ambassador to Niger, stood up to dispute the claims, the Bush Administration outed the CIA’s head of nuclear threat control – Valerie Plame, who happened to be Wilson’s wife.

While the conquest of Iraq was a military masterpiece, the weakness of the planning for the peace was evident. Despite the “Mission Accomplished” announcements, the tangled web of Iraqi ethnic resentments provided rich soil for Al Qaeda sympathizers. The nation began to collapse, and the Bush team kept National Reservists in the theater and called up large numbers of additional troops in a “Surge” that finally allowed Iraq to return to self-government.

Since then, the Obama administration’s policy has been to disengage slowly, providing time and incentives for the Iraqi nation to stand on its own two feet. It hasn’t been a pretty picture.

At root, what Jr’s Administration revealed was the danger of disengaging from reality – of treating all foreign policy decisions first and foremost as domestic political decisions. The Democratic response was to serve as the loyal opposition to the nation’s commander-in-chief. They swallowed their complaints and criticism, and focused on trying to ensure that damage was minimized and lessons were learned.

So what about Jeb’s claims that the Obama administration was culpable in the rise of ISIS? How sophisticated a view of foreign policy do they represent?

Well, I would assert “naive to the point of dangerous.” Bush calls, for example, for arming of the Kurds. That can only antagonize Turkey, which has seen 40,000 casualties in a decades-long struggle against Kurdish separatists. Turkey’s president Erdogan was apparently a supporter of IS until attempts to control the activities of Sunni extremists led to a number of bombings. So, no, he’s not a reliable ally, but there’s no reason to push him into the arms of IS.

Or the claim that the Obama Administration didn’t take strong initial action against Islamic State (IS)? Fair enough, in 20/20 hindsight. IS grew out of the Syrian civil war, which started as a rebellion against a leader guilty of crimes against humanity, but became a global lightning rod for militant extremists as it dragged on.

The nature and ambitions of IS were not obvious until defectors revealed that operations were actually being guided in secret by Saddam’s Baathist generals. The initial IS surge was so successful because it exploited Sunni resentment against Shia dominance of Iraq’s government, with many of the early atrocities committed against Shia troops guarding the peace in Western Iraq.

The policies stated by Bush would bring additional American troops and materiel back into the region. That makes sense, except that the most potent weapons in the IS arsenal are suicide bombs crafted from Humvees captured from Iraqi bases. Until the Iraqi security forces demonstrate the resolve to engage the enemy, unless America commits indefinitely to a military presence, IS will simply fade into the civilian population, only to reappear after we leave to take advantage of the resources we leave behind.

And the final charge that Clinton didn’t visit Iraq during her tenure at State: well, there was no State Department presence. The entire operation was run out of the Department of Defense. What would have been the point of starting a turf war?

I understand that in domestic politics, the best defense is always a strong offense. It was perhaps to be expected that Bush would mount his attack against the Democratic front-runner. But what the tone and substance of the attack reveals is a dangerous lack of understanding of the issues. Given the documented history, Hillary will clean his clock in the run-up to the general election, or we’ll find ourselves suffering at the hands of the government we deserve.

Get With the Program

My first experience with programming occurred at a Cub Scout meeting. To build our visions of the future, our den leader brought in parents to talk about the work that they did. My father came in with a set of 3×5 cards. We sat in a circle and he handed each of us a card. Following the instructions on the card, we were able to perform an operation of binary logic on two numbers. Believe me, we didn’t have a clue what we were doing, but the insights we gained into the nature of computing sure made us feel smart.
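The card game can be sketched in code. This is my reconstruction, not the actual cards my father handed out, but it captures the pattern: each “scout” follows one simple rule for a single bit, and the circle as a whole adds two numbers that no individual ever sees.

```python
# A toy reconstruction of the Cub Scout exercise: each "scout" holds one
# card describing a single-bit rule, and together they add two numbers.
# (The card instructions here are a guess; the point is the pattern.)

def full_adder_card(a_bit, b_bit, carry_in):
    """One scout's card: combine two bits plus a carry, pass the carry on."""
    total = a_bit + b_bit + carry_in
    return total % 2, total // 2  # (sum bit, carry out)

def circle_of_scouts(a, b, n_bits=8):
    """Hand each scout one bit of a and b; the carry travels around the circle."""
    carry = 0
    result = 0
    for i in range(n_bits):
        sum_bit, carry = full_adder_card((a >> i) & 1, (b >> i) & 1, carry)
        result |= sum_bit << i
    return result

print(circle_of_scouts(5, 9))  # 14 -- no scout knows the answer, the circle does
```

The design point is the same one the den leader was making: the computation lives in the pattern of hand-offs, not in any single participant.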

Perhaps forty-five years later, I now go by the airy title of “Software Engineer.” This is actually a fraud. By law, “engineer” is a designation reserved for technologists who have passed a rigorous assessment of capability. No such test exists for software. As one consequence, the lack of reliable practices for predicting cost and schedule for software development is a scandal. At one point, surveys of software projects reported that almost half failed.

Given the importance of software, a number of serious efforts have been made to attack the problem, spanning the full spectrum from machine design all the way up to the executive suites. Methods and tools, team dynamics, and project management were all brought under scrutiny and overhauled. Nothing has worked.

Looking back at that Cub Scout meeting, I must admit to some bemusement, because in fact what goes on in computers is exactly what we did with our 3×5 cards, except on a much larger scale and at incomprehensibly faster pace. It’s just a bunch of 0’s and 1’s flying around, with some conversion going on to allow people to digest the results of the calculation. Why should it be so hard to get a handle on the process?
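The “conversion” half of that claim can be shown concretely: inside the machine there is only one bit pattern, and what it “means” depends entirely on how we choose to read it back out.

```python
# One bit pattern, several readings -- the only difference is the conversion.
n = 77
print(bin(n))   # 0b1001101 -- the value as raw bits
print(chr(n))   # 'M'       -- the same bits read as a text character
print(hex(n))   # 0x4d      -- and as hexadecimal for human inspection
```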

Among the reasons are those that we might hope would be resolved in the foreseeable future. First is the enormously rapid pace of change in the industry. Moore’s Law meant that the difficulty of the problems we could address doubled every eighteen months over a span of thirty years – compounded, that growth comes to a factor of a million! In data storage and access, the factors are even larger. No other technology discipline comes even close to boasting these kinds of numbers. The rapid pace of change means that what was important and interesting five years ago is meaningless today. So how can we certify practices at the forefront of technology?
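The compounding is easy to verify: thirty years at one doubling every eighteen months is twenty doublings, and two to the twentieth power is just over a million.

```python
# Moore's Law compounding: one doubling every 18 months, sustained for 30 years.
years = 30
months_per_doubling = 18
doublings = years * 12 // months_per_doubling  # 20 doublings
growth = 2 ** doublings
print(doublings, growth)  # 20 doublings -> 1,048,576x, i.e. "a million times"
```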

Of course, as has been reported in the technology press, Moore’s Law has progressed to the extent that fundamental laws of physics will impose limits on computing power in the foreseeable future. That may allow the test designers some time to catch up with progress.

A second cause for hope is that pattern processing algorithms are becoming sophisticated enough to effectively interpret human behavior. Some of those algorithms are used for gesture and speech recognition. Some are used to recognize our habits and track our schedule, allowing digital assistants to interpret our gestures and speech as specific instructions appropriate to the context. The near-term impact of this capability will be that more and more of us will become programmers, because the barrier to entry – learning how to “speak” languages understood by computers – will be removed.

But there are certain aspects of computer programming that will remain intractable, the foremost of them being trying to visualize clearly the outcome of a program’s execution.

My father Karl was given a consulting assignment in the ‘60s at a large defense contractor. The president had committed to automation of production operations, and the project was nearing completion. But while the developers were confident that the software would perform as specified, nobody had performed a comprehensive assessment of the impact on current operations. That was the task presented to Karl.

Of particular concern was the production scheduling office, and what Karl found there transformed his understanding of programming. The walls of the large room were covered with clipboards hanging on hooks. Colored sheets were used to identify time-sensitive assemblies. Written manuals defined the procedures for movement of the clipboards on the walls. In many respects, it resembled the process he guided the Cub Scouts through a few years later. It was a program, and there was no way that the production automation system would be able to replicate its level of sophistication. When presented with the assessment, the president chose to force the scheduling team to use the new software, and production collapsed.

Fundamentally, software is just manipulations of zeros and ones. It is useful only to the extent that we can use the zeros and ones to represent the real world. For most of the history of computing, that meant that the principal role of the software developer was to translate the language of the human expert into terms that could be executed by a computer. When that knowledge was packaged and distributed, it meant that every local expert was replaced by a world’s expert, improving the decisions made by others, who then commissioned software solutions that stimulated improvements elsewhere, all at an ever increasing pace that frustrated the attempts by managers to actually control the end result.

What Karl realized in the scheduling office was that programs exist all around us. Some of them we call “habits”, others we call “software”, others we call “symphonic scores.” To realize the intentions of the people that commission the creation of the program, the programmer must describe the actions to be taken in terms understood by the executor of the program – whether a mechanic, a computer or a musician. What is common to each of those situations, however, is the fact that the program only specifies a pattern of execution. A spark plug can be changed in a driveway or in a repair shop. Microsoft Word can be used to write a love letter or a subpoena. A symphony can be performed on period or modern instruments. The result in each case may be subtly or grossly different, but the pattern of the execution is the same.

And there was no good way of representing such patterns. It wasn’t science, and it wasn’t engineering. “Programming” was as good a word as any to use.

Rude is Not the New ‘PC’

With the Trump campaign only now announcing that they are going to bring in experts to craft policy positions, it is easy to fall into the cant adopted by Hillary Clinton. In a press briefing in New Hampshire today, Clinton observed that “Megyn is a strong woman and can take care of herself,” and dismissed the Trump candidacy as “entertainment.”

But it’s far, far more than that. Trump stood up at the Fox debate and threw his money and ego around. The other candidates came off as a coterie in short pants, each one talking over the other as they sniped in the background. The goal was to make Trump sound silly, but it was obvious who had the strongest personality on the stage.

The image that comes most clearly to mind when I think of that scene is a photo of Hitler and his high command that my family came across in the effects of my grandmother’s last husband, who served on Eisenhower’s staff at the end of World War II. In the photo, the warriors are ranged behind Hitler in combat dress, but none of them looked half as tough as the Führer in shorts. Despite the pout and over-coiffed hair, the same was true of Trump on the debate platform.

I’m not going to suggest that Trump is another Hitler. The man seems affable, and genuinely concerned about the “little people.” But he is obviously unwilling or unable to recognize that the jibes and threats he bandies about on the stage are a dangerous model. Every time Trump shoots off his mouth, a team of lawyers scurries in the background, evaluating whether they have leverage to impose his will on adversaries (as appears to have occurred at Fox News today through Roger Ailes), or whether to backtrack, turn on the charm, and make nice.

I don’t think that Trump understands that when he tells a woman “I’m nice to people that are nice to me,” many women in America hear echoes of an abusive boss engaged in inappropriate groping. And of civil servants, covered by a blanket assessment of idiocy, I can’t help but remember Newt Gingrich and his anti-government rhetoric during the Clinton era, rhetoric that morphed into ridiculous tales of “UN Black Helicopters” preparing to enforce a “New World Order,” whipping up hysteria and paranoia among civilian militias that peaked with McVeigh’s truck bomb murder of the children at the Murrah Building day-care center in Oklahoma City.

And as for the claim that illegal immigrants are “rapists” – we’ve heard things like that about minorities before. What was the epithet? “Christ killers?”

Trump is unsuitable for the Oval Office because he doesn’t realize that the President fires the imagination of the public with an authority presumed to be vetted by the federal bureaucracy. People without his sense of nuance and balance are going to emulate his conduct and manner of speaking. Rude men will run with his claims of oppression under the doctrine of “political correctness,” and be emboldened by his use of raw power to intimidate others. They may not have his resources, but they will emulate his conduct, and hurt a lot of other people in the process.

So, no, we shouldn’t consider this entertainment. It is dangerous. Trump needs to learn to control his mouth, or get off the political stage.

Love Works Posted

Just a note that I’ve uploaded the rest of Love Works. Click on the page link on the banner. The post explains the delay.

The document was originally created in OpenOffice, and the images acquired a grey background in the port to Word. At some point I’ll fire up my old laptop and break it apart in OpenOffice. If there’s an immediate need, let me know and I’ll push it up on the priority list.