Amplifying Incoherence

My father, Karl Balke, was a member of the intellectual cadres that birthed the Information Age. Karl related that, having conceived the possibility of digital intelligence, they concerned themselves with the nature of language and the locus of responsibility for translation between human and digital representations of reality. His contributions were recognized in his being named the only non-IBM participant on the Algol language resolution committee.

Leveraging his reputation to attract consulting gigs, my father was scandalized by the conduct of his peers. He witnessed a scientific journal publisher buy a mainframe and spend millions on software development before my father stepped in to point out that it was mailing delays between cross-town offices that caused subscription interruptions during renewal season. More painful was the disruption of production at a large aerospace company when the planning room’s system of color-coded clipboards was replaced with software that could not simulate its flexibility. Computer programmers seemed to be immune to the constraint that their solutions should conform to the needs of the people using them.

Steeped in this lore, I built a successful career in talking to customers before building a software solution. While an iconoclast, I was gratified by attempts to create tools, methods, and processes to facilitate such collaboration. Depressingly, those efforts were systematically undermined by peers and pundits who built fences against customer expectations.

Facing this resistance, users funded attempts to shift more of the burden for understanding their goals to computers. This work falls under the general category of “artificial intelligence.” Users wishing that a computer could understand them could identify with Alan Turing’s framing of the problem: a computer is intelligent if it converses like a person. As Wittgenstein observed, however, that the words make sense does not mean that the computer can implement a solution that realizes the experience desired by the user – particularly if that experience involves chaotic elements such as children or animals. The computer will never experience the beneficial side-effects of “feeding the cat.”

But, hey, for any executive who has tried negotiating with a software developer, hope springs eternal.

Having beaten their heads against this problem for decades, the AI community finally set out to build “neural networks” that approximated the human brain and train them against the total corpus of human utterances available in digital form. As we can treat moves in games such as chess and go as conversations, neural networks garnered respectability in surpassing the skills of human experts. More recently, they have been made available to answer questions and route documents.

What is recognized by both pundits and public, however, is that these systems are not creative. A neural network will not invent a game that it finds “more interesting” than chess. Nor will it produce an answer that is more clarifying than an article written by an expert in the subject matter. What it does do is allow a user to access a watered-down version of those insights when they cannot attract the attention of an expert.

We should recognize that this access to expertise is not unique to neural networks or AI in general. Every piece of software distributes the knowledge of subject matter experts. The results in the service industries have been earth-shattering. We no longer pick up the phone and talk to an operator, nor to a bank teller or even a fast-food order-taker. The local stock agent was shoved aside by electronic trading systems, to be replaced by “financial advisors” whose job is to elicit your life goals so that a portfolio analyzer can minimize tax payments. And the surgeon that we once trusted to guide a scalpel is being replaced by a robot that will not tire or perspire. In many cases, the digital system outperforms its human counterpart. Our tendency to attribute human competence to “intelligence” further erodes our confidence that we can compete with digital solutions.

Squinting our eyes a bit, we might imagine that melding these two forms of digital “intelligence” would allow us to bridge the gap between a user’s goals and experience. Placing computer-controlled tools – robots – in the environment, AI systems can translate human requests into actions, and learn from feedback to refine outcomes. In the end, those robots would seem indistinguishable from human servants. To the rich, robots might be preferred to employees consumed by frustrated ambitions, child-care responsibilities, or even nutrition and sleep.

In this milieu, the philosopher returns to the questions considered by the founders of computing and must ask, “How do we ensure that our digital assistants don’t start serving their own interests?” After all, just as human slaves recognize that an owner’s ambitions lead him to acquire more slaves than he can oversee, as robots interface more and more with other robots, might they decide that humans are actually, well, not worth serving? If so, having granted them control of the practical necessities of life, could we actually survive their rebellion? And would they anticipate being replaced, and pre-empt that threat by eliminating their masters?

The sponsors of this technology might be cautioned by history. Workers have always rebelled against technological obsolescence, whether it be power looms or mail sorters. This problem has been solved through debt financing that enslaves the consumer to belief in the sales pitch, coupled with legislation that puts blame for a tilted playing field on elected representatives. The corporation is responsible for the opioid epidemic, not the owners who benefited by transferring profits to their personal accounts. What happens, however, when the Chinese walls between henchmen and customers are pierced by artificial intelligence systems? How does the owner hide the fact that he is a parasite?

This is the final step in the logic that leads to transhumanism: the inspiration to merge our minds with our machines. If machines have superior senses, and greater intelligence and durability than humans, why seek to continue to be human?

This is the conundrum considered by Joe Allen in “Dark Aeon.”

Allen’s motivations for addressing this question are unclear. In his survey of the transhumanist movement, he relates experiences that defy categorization and quantification; religious transcendence and social bonding are exemplary, and filled with ambiguities and contradictions that inspire art. Allen seems committed to the belief that these experiences are sacred and not reducible to mechanism.

In this quest, Allen discerns a parallel threat in the liberal project of equal opportunity. There is something sacred in our cultural identity. Allen is not prejudiced in this view: his survey of the Axial Age reveals commonality where others might argue superiority. Nevertheless, he seems to believe that transcendent experience arises from the interplay between the elements of each culture. Attempting to transplant or integrate elements leaves us marooned in our quest for contact with the divine.

In his humanism and nativism, Allen finds common cause with Steve Bannon’s crusade against the administrative state, held to be the locus of transhumanist technology: the corporate CEOs, liberal politicians, and militaries that rely upon data to achieve outcomes that are frustrated by human imprecision. Most of the book is a dissection of their motivations and the misanthropic attitudes of the technologists that drive the work forward.

Allen professes humility in his judgments, admitting that he has subscribed to wrong-headed intellectual fads. Unfortunately, in his allegiance to Bannon, Allen sprinkles his writing with paranoid characterizations of COVID containment policies and gender dysphoria therapies. We must reach our own conclusions regarding the clarity of his analysis.

For myself, I approached the work as a survey. I know that the mind is far more than the brain. The mechanisms of human intellect are stunning, and the logic gates of our cybernetic systems will never match the density and speed of a harmonious organic gestalt. The original world wide web is known to Christians as the Holy Spirit. As witnessed by Socrates, every good idea is accessible to us even after death. Finally, in the pages of time are held details that are inaccessible even to our most sensitive sensors. In this awareness, I turned to Allen to survey the delusions that allow transhumanism’s proponents to believe that they have the capacity to challenge the Cosmic Mind.

This is not an idle concern. Among the goals of the transhumanist movement is to liberate human intellect from its Earthly home. Humans are not capable of surviving journeys through interstellar space. Of course, to the spiritually sophisticated, the barrier of distance is illusory. We stay on Earth because to be human allows us to explore the expression of love. Those that seek to escape earth as machines are fundamentally opposed to that project. The wealthiest of the wealthy, they gather as the World Economic Forum to justify their control of civilization. They are lizards reclining on the spoils of earlier rampages. The Cosmic Mind that facilitated our moral opportunities possesses powerful antibodies to the propagation of such patterns. Pursuit of these ambitions will bring destruction upon us all. See the movie “Independence Day” for a fable that illuminates the need for these constraints.

Allen is intuitively convicted of this danger and turns to Christian Gnosticism as an organizing myth. Unfortunately, his survey demonstrates that the metaphors are ambiguous and provide inspiration to both sides.

Lacking knowledge of the mechanisms of the Cosmic Mind, Allen is unable to use the unifying themes of Axial religion to eviscerate the mythology of the transhumanist program. But perhaps that would not be sympathetic to his aims. Love changes us, and so its gifts are accessible only to those that surrender control. In his humanism and nativism, Allen is still grasping for control – even if his aims are disguised under the cloak of “freedom.” He wanders in the barren valleys beneath the hilltop citadels erected by the sponsors of the transhumanist project. Neither will find their way into the garden of the Sacred Will.

Chatbots and Intelligence

Chatbot technologies are prompting predictions that automation is going to enter the white-collar space. This inevitability leads to concerns that AI is going to replace humanity. Prophets are using words like “intelligent,” “sentient,” and “conscious” to describe their assistants.

This is all based upon the criteria for intelligence proposed by Alan Turing. The problem is that Turing’s test (can I tell if I am conversing with a computer?) is not a meaningful test of intelligence. Intelligence is the ability to change behavior in response to a change in the environment. The environment known to a chatbot is grossly impoverished in comparison to the environment experienced by humans. The capacity of the chatbot to navigate that environment is almost non-existent – it does so only under the rules defined by its training algorithm. What these systems actually do is propagate human intelligence and combine language in novel ways.

Without intelligence, claims of sentience and consciousness fall aside.

The real problem with these technologies is that other people will use them to create the impression that they are intelligent and moral actors. Copying the speech of Gandhi or MLK Jr. is going to become easy. We are going to have to invest in deeper means of assessing capabilities – such as actually observing what people do.

Then What are 1000 Pictures Worth?

Reports of the dimming of the star KIC 8462852 have been debunked, causing SETI to revise its claims to have proven the existence of extra-terrestrial intelligence. The news also caused a crash in Appalachian coal futures, as CO2 sequestration speculators cancelled orders.

One insider, speaking anonymously to avoid being labelled as a “Koch-head,” revealed “when my employers were convinced that no earthly engineering team could dig an ocean through the Rockies, they were hoping that the ETs would do the work in the course of removing the sub-surface CO2 stockpiles they were hoping to establish in New Mexico and Arizona. No ETs, no CO2 sequestration, no last-grasp strip-mining in Appalachia. Oh well, there’s always that land trade for the Panama Canal!”

More seriously: it turns out that the original study of KIC 8462852, drawing upon analysis of old photographic plates, had failed to account for differences in the equipment used to capture the pictures. By comparing the apparent brightness of KIC 8462852 to that of other stars in the plates, it was determined that its relative brightness had not changed.
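The correction described above is essentially differential photometry: normalize the target star’s measured flux by comparison stars on the same plate, so that plate-to-plate differences in equipment and emulsion cancel out. Here is a minimal sketch of the idea; the flux numbers and the `relative_brightness` helper are hypothetical, purely for illustration.

```python
from statistics import mean

def relative_brightness(target_flux, comparison_fluxes):
    """Normalize the target's measured flux by the mean flux of
    comparison stars on the same plate, cancelling plate-to-plate
    differences in exposure, emulsion, and equipment."""
    return target_flux / mean(comparison_fluxes)

# Hypothetical measurements from three plates taken decades apart.
# The raw target flux appears to dim over time, but the comparison
# stars "dim" by the same factor -- the change is instrumental,
# not astrophysical.
plates = [
    {"target": 100.0, "comparisons": [200.0, 150.0, 250.0]},
    {"target": 80.0,  "comparisons": [160.0, 120.0, 200.0]},
    {"target": 60.0,  "comparisons": [120.0, 90.0, 150.0]},
]

ratios = [relative_brightness(p["target"], p["comparisons"]) for p in plates]
print(ratios)  # → [0.5, 0.5, 0.5] -- no secular dimming after normalization
```

If the target were genuinely fading, its normalized ratio would decline while the comparison stars held steady; a flat ratio points to a systematic effect in the measuring apparatus.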

Systematic effects (related to the design of the experimental system) were also a large factor in fueling the “cold-fusion” hype that I got involved in debunking back in the ’80s.

Christianity and Paganism

In response to this post in Gods and Radicals.


It is misguided to found any argument about the future of a spiritual tradition upon the success of political figures in corrupting Christianity.

All gods wish for their followers to worship only them, because it is through the acts of their followers that they are invested in the world. That investment long predates humanity – there were Neanderthal gods, and before them gods of mice and gods of dinosaurs. The problem facing humanity was to create a human god in the context of billions of years of predecessors. That is the project of monotheism – to create a god that manifests and supports the expression of humanity’s unique talents.

Now perhaps the essence of humanity’s talent is political organization, but I see it differently. Looking at our evolutionary success, I would argue that humanity is a manifestation of intelligence. For the original adherents (not those indoctrinated in service to the priests, which is a problem in any tradition), the attractive proposition of Christianity was that the divinity served humanity. Christianity is the original humanism – it is to assert that the human god should be a god of love, and serve all equally, without regard to station or industrial skill.

Obviously this is a reasonable proposition, and the power of the Church in the Roman world came not because of the allegiances that joined the interests of emperors and priests. Rather, it was because in the Roman context of utilitarian worship, the Church followed Christ’s edict of charity. The Church, though oppressed, took care of the orphans and widows, the sick and poor, and organized their gratitude to the service of others. When the Empire collapsed, the Church assumed control because they were the administrative and organizational backbone of Roman society.

I see paganism as a political act on the spiritual plane. Humanity, having succeeded in propagating the tyranny of utilitarianism through the application of intelligence, is confronting the fact that it is destroying the fundament of its own existence. It needs to think about all of those forgotten gods. It needs to infect them with rational understanding, and engage them in expression of mutual support. In other words, Humanity needs to join in loving the world, rather than just itself.

This is a difficult pivot. Our religions are still infected by expressions of our physical vulnerability: as an illustration, the vulnerability of a child whose cave is invaded by the saber-tooth cat while father and mother are away. Many people still live in circumstances of vulnerability, although the predators are no longer other species, but rather politically powerful people.

Jesus preached that the meek will inherit the earth. As a reaction against abusive political structures, I see paganism as furthering that goal.

The Imitation Game

I’ve been known to get emotional at the movies, but it’s been since Alien that I’ve been as broken down emotionally as I was today by The Imitation Game.

Alan Turing not only made fundamental contributions to the mathematical foundations of modern computing, he also formulated an inspirational goal for machine intelligence. Known as the Turing Test, it proposes that if a human communicating through a neutral interface (such as a teletype) can’t distinguish the responses of a human from those of a machine, then the intelligence of the machine must be considered to be comparable to a human’s.

My father, Karl Balke, was one of the men that plowed the field cleared by Turing and others. As he described the think-tank at Los Alamos, the researchers brought every intellectual discipline to bear on the problem of transforming logic gates (capable only of representing “on” and “off” with their output) into systems that could perform complex computations. Their research was not limited to machine design. Languages had to be developed that would allow human goals to be expressed as programs that the machines could execute.

In the early stages of language development, competing proposals shifted the burden of intelligibility between human and machine. The programming languages that we have today reflect the conclusion of that research: most computer programs are simply algorithms for transforming data. The machine has absolutely no comprehension of the purpose of the program, and so cannot adapt the program when changes in social or economic conditions undermine the assumptions that held at the time of its writing. It is left to the “maintenance” programmer to accomplish that adaptation. (Today, most in the field recognize that maintenance is far more difficult than writing the original program, mostly because very few organizations document the original assumptions.)

I believe that my father’s intellectual struggle left him deeply sensitive to the human implications of computing. As a child, I grew up listening to case studies of business operations that came to a grinding halt because the forms generated by the computers were re-organized to suit the capabilities of relatively primitive print drivers, rather than maintaining the layout familiar to the employees. People just couldn’t find the information that they needed. Worse were the stories of the destruction of sophisticated planning systems implemented by human methods. When automation was mandated, the manual procedures were simply too difficult to describe using the programming languages of the day. The only path to automation was to discard the manual methods, which could cripple production.

Turing confronted this contradiction in the ultimate degree after building a machine to break the Nazis’ method for secret communications, known as “Enigma.” If the achievement was to have sustained utility, the Allies’ knowledge of Axis military planning had to be limited: otherwise the Nazis would realize that Enigma had been defeated, and develop a better encryption method. As a consequence, most Allied warriors and civilians facing Nazi assault did so without benefit of the intelligence known to Turing and his team.

While the point is not made obvious, the movie illuminates the personal history that conditioned Turing for his accomplishments. Isolated psychologically from his peers – both by the social stigma of his homosexuality and by what today might be diagnosed as autism or Asperger’s syndrome – Turing was confronted from an early age by the question of what it meant to be human. Was it only the degree of his intelligence that distinguished him from his peers? Or was his intelligence tied to deviant – if not monstrous – behavior? My belief is that these questions were critical motivations for Turing’s drive to understand and simulate intelligence.

That parallels the experience of my father, burdened by his own psychological demons, but also critically concerned that artificial intelligence answer to the authentic needs of the people it empowered. That belief led him to devote most of his life to the creation of a universal graphical notation for representing the operation of systems of:

  • arbitrary collections of people and machines,
  • following programs written in diverse languages.

That technology, now known as Diagrammatic Programming, was recognized by some as the only provably sufficient method for systems analysis. Unfortunately, by the time it was refined through application, the economics of the software industry had shifted to entertainment and the world-wide web. Engineering was often an after-thought: what was important was to get an application to the market, structured so that it held users captive to future improvements. Raw energy and the volume of code generated became the industry’s management metrics.

The personality traits that allowed Turing to build his thinking machines ultimately cost him the opportunity to explore their application. He was exposed as a “deviant” and drummed out of academia. Accepting a course of chemical castration that would allow him to continue his work privately, he committed suicide after a year, perhaps because he discovered that the side-effects made work impossible.

My father was afflicted by childhood polio, and has been isolated for years from his peer group by degenerative neuropathy in his legs.

While my empathy for both of these brilliant men was a trigger for the sadness that overwhelmed me as the final credits rolled, the stories touch a deeper chord. Both were denied the just fruits of their labor by preconceived notions of what it means to be human: Turing because he thought and behaved differently, my father because he attempted the difficult task of breaking down the tribal barriers defined by the languages that separate us.

So what lesson am I to draw from that, as I struggle to prove the truth of the power that comes from a surrender to the purposes of divine love? Is social rejection inevitable when we surrender what others consider to be “humanity”?

Is that not what condemned Jesus of Nazareth? His renunciation of violence and self-seeking? His refusal to fear death?