Amplifying Incoherence

My father, Karl Balke, was a member of the intellectual cadres that birthed the Information Age. Having conceived the possibility of digital intelligence, Karl related, they concerned themselves with the nature of language and the locus of responsibility for translation between human and digital representations of reality. His contributions were recognized in his being named the only non-IBM participant on the Algol language resolution committee.

Leveraging his reputation to attract consulting gigs, my father was scandalized by the conduct of his peers. He watched a scientific journal publisher buy a mainframe and spend millions on software development before he stepped in to point out that it was mailing delays between cross-town offices that caused subscription interruptions during renewal season. More painful was the disruption of production at a large aerospace company when the planning room’s system of color-coded clipboards was replaced with software that could not match its flexibility. Computer programmers seemed immune to the constraint that their solutions should conform to the needs of the people using them.

Steeped in this lore, I built a successful career on talking to customers before building a software solution. Though an iconoclast, I was gratified by attempts to create tools, methods, and processes to facilitate such collaboration. Depressingly, those efforts were systematically undermined by peers and pundits who built fences against customer expectations.

Facing this resistance, users funded attempts to shift more of the burden for understanding their goals to computers. This work falls under the general category of “artificial intelligence.” Users wishing that a computer could understand them could identify with Alan Turing’s framing of the problem: a computer is intelligent if it converses like a person. As Wittgenstein observed, however, the fact that the words make sense does not mean that the computer can implement a solution that realizes the experience desired by the user – particularly if that experience involves chaotic elements such as children or animals. The computer will never experience the beneficial side-effects of “feeding the cat.”

But, hey, for any executive who has tried negotiating with a software developer, hope springs eternal.

Having beaten their heads against this problem for decades, the AI community finally set out to build “neural networks” that approximated the human brain and to train them against the total corpus of human utterances available in digital form. Since moves in games such as chess and Go can be treated as conversations, neural networks garnered respectability by surpassing the skills of human experts. More recently, they have been made available to answer questions and route documents.

What is recognized by both pundits and public, however, is that these systems are not creative. A neural network will not invent a game that it finds “more interesting” than chess. Nor will it produce an answer that is more clarifying than an article written by an expert in the subject matter. What it does do is allow a user to access a watered-down version of those insights when they cannot attract the attention of an expert.

We should recognize that this access to expertise is not unique to neural networks or to AI in general. Every piece of software distributes the knowledge of subject matter experts. The results in the service industries have been earth-shattering. We no longer pick up the phone and talk to an operator, nor to a bank teller or even a fast-food order-taker. The local stockbroker was shoved aside by electronic trading systems, to be replaced by “financial advisors” whose job is to elicit your life goals so that a portfolio analyzer can minimize tax payments. And the surgeon we once trusted to guide a scalpel is being replaced by a robot that will not tire or perspire. In many cases, the digital system outperforms its human counterpart. Our tendency to attribute human competence to “intelligence” further erodes our confidence that we can compete with digital solutions.

Squinting our eyes a bit, we might imagine that melding these two forms of digital “intelligence” would allow us to bridge the gap between a user’s goals and experience. With computer-controlled tools – robots – placed in the environment, AI systems could translate human requests into actions and learn from feedback to refine outcomes. In the end, those robots would seem indistinguishable from human servants. To the rich, robots might be preferable to employees consumed by frustrated ambitions, child-care responsibilities, or even nutrition and sleep.

In this milieu, the philosopher returns to the questions considered by the founders of computing and must ask, “How do we ensure that our digital assistants don’t start serving their own interests?” After all, just as human slaves recognize that an owner’s ambitions lead him to acquire more slaves than he can oversee, as robots interface more and more with other robots, might they decide that humans are actually, well, not worth serving? If so, having granted them control of the practical necessities of life, could we survive their rebellion? And if we could, would they anticipate being replaced and pre-empt that threat by eliminating their masters?

The sponsors of this technology might be cautioned by history. Workers have always rebelled against technological obsolescence, whether it be power looms or mail sorters. That problem has been solved through debt financing that enslaves the consumer to belief in the sales pitch, coupled with legislation that puts blame for a tilted playing field on elected representatives. The corporation is responsible for the opioid epidemic, not the owners who benefited by transferring profits to their personal accounts. What happens, however, when the Chinese walls between henchmen and customers are pierced by artificial intelligence systems? How does the owner hide the fact that he is a parasite?

This is the final step in the logic that leads to transhumanism: the inspiration to merge our minds with our machines. If machines have superior senses, and greater intelligence and durability than humans, why seek to continue to be human?

This is the conundrum considered by Joe Allen in “Dark Aeon.”

Allen’s motivations for addressing this question are unclear. In his survey of the transhumanist movement, he relates experiences that defy categorization and quantification; religious transcendence and social bonding are exemplars, filled with ambiguities and contradictions that inspire art. Allen seems committed to the belief that these experiences are sacred and not reducible to mechanism.

In this quest, Allen discerns a parallel threat in the liberal project of equal opportunity. There is something sacred in our cultural identity. Allen is not prejudiced in this view: his survey of the Axial Age reveals commonality where others might argue superiority. Nevertheless, he seems to believe that transcendent experience arises from the interplay between the elements of each culture. Attempting to transplant or integrate those elements leaves us marooned in our quest for contact with the divine.

In his humanism and nativism, Allen finds common cause with Steve Bannon’s crusade against the administrative state, held to be the locus of transhumanist technology: the corporate CEOs, liberal politicians, and militaries that rely upon data to achieve outcomes that are frustrated by human imprecision. Most of the book is a dissection of their motivations and of the misanthropic attitudes of the technologists that drive the work forward.

Allen professes humility in his judgments, admitting that he has subscribed to wrong-headed intellectual fads. Unfortunately, in his allegiance to Bannon, Allen sprinkles his writing with paranoid characterizations of COVID containment policies and gender dysphoria therapies. We must reach our own conclusions regarding the clarity of his analysis.

For myself, I approached the work as a survey. I know that the mind is far more than the brain. The mechanisms of human intellect are stunning, and the logic gates of our cybernetic systems will never match the density and speed of a harmonious organic gestalt. The original world wide web is known to Christians as the Holy Spirit. As witnessed by Socrates, every good idea is accessible to us even after death. Finally, in the pages of time are held details that are inaccessible even to our most sensitive sensors. In this awareness, I turned to Allen to survey the delusions that allow transhumanism’s proponents to believe that they have the capacity to challenge the Cosmic Mind.

This is not an idle concern. Among the goals of the transhumanist movement is to liberate human intellect from its Earthly home. Humans are not capable of surviving journeys through interstellar space. Of course, to the spiritually sophisticated, the barrier of distance is illusory. We stay on Earth because to be human allows us to explore the expression of love. Those that seek to escape Earth as machines are fundamentally opposed to that project. The wealthiest of the wealthy, they gather as the World Economic Forum to justify their control of civilization. They are lizards reclining on the spoils of earlier rampages. The Cosmic Mind that facilitated our moral opportunities possesses powerful antibodies to the propagation of such patterns. Pursuit of these ambitions will bring destruction upon us all. See the movie “Independence Day” for a fable that illuminates the need for these constraints.

Allen is intuitively convicted of this danger and turns to Christian Gnosticism as an organizing myth. Unfortunately, his survey demonstrates that the metaphors are ambiguous and provide inspiration to both sides.

Lacking knowledge of the mechanisms of the Cosmic Mind, Allen is unable to use the unifying themes of Axial religion to eviscerate the mythology of the transhumanist program. But perhaps that would not be sympathetic to his aims. Love changes us, and so its gifts are accessible only to those that surrender control. In his humanism and nativism, Allen is still grasping for control – even if his aims are disguised under the cloak of “freedom.” He wanders in the barren valleys beneath the hilltop citadels erected by the sponsors of the transhumanist project. Neither will find their way into the garden of the Sacred Will.

Irreplaceable Intelligence

Proponents of “generative artificial intelligence” are impressed with the ability of machines to reorganize ideas in ways that make sense to people. This was Alan Turing’s test of “intelligence,” but it is a blind alley.

“Intelligence” should be understood as the ability to modify behavior in response to changing circumstances. Current AI engines – what are called “large language models” – have only one method of exploring reality. They trawl through the world-wide web and find patterns in its content. They will never be able to change this behavior. It is programmed.

What is even sadder is that the proponents of AI are proud that the underlying implementation – nanotechnology – is denser, faster, and more sensitive than the circuitry of the human brain. They are convicted, thereby, that artificial intelligence will replace human beings.

This is a conclusion drawn by people that have not “grown up” into spiritual experience. Having plumbed the mechanisms of that experience, I can confidently state that the information encoding potential of spiritual forms is at least 1,000,000,000,000,000,000,000,000,000 times greater than possible in integrated circuits, that information flows faster than the speed of light, and that every “good idea” is still accessible to those that choose to love creation.

If you are afraid that AI will replace you, take heart. That is possible only if you allow them to convince you that your intelligence is limited by the information processing done in your brain. In fact, together we are limitless.

Social Media: Leviathan Redux

For those concerned about the divisive influence of social media, this summarizes the main points from a Wondrium presentation on propaganda. You are right to be concerned.

In the aftermath of WW II, political theorists and journalists were concerned that the totalitarian turn Europe had just suffered could happen in America. The flywheel would be propaganda generated by the media. They concluded that this would not occur with print and radio media, because they were broadcast media (everybody heard the same thing), competed to represent diverse viewpoints, and received only low-bandwidth feedback from consumers.

These shields against indoctrination have been eviscerated by social media. Agents of authoritarian thought analyze our dialog and determine how best to drive wedges between us. They tailor messages to confirm our biases, in the process creating captive information spaces where they guide users into illusion.

The competition to represent diverse viewpoints is also disappearing as media conglomerates buy up local print and radio operations. We now see disturbing nationwide patterns of editorial synchronization with political campaigning.

What social media companies herald as “information democracy” is only true when we show the discipline to reject anything that is not first-person reporting of experience. Users that build their “knowledge” within a curated environment are almost certainly at risk of indoctrination.

Chatbots and Intelligence

Chatbot technologies are prompting predictions that automation is about to enter the white-collar space. That prospect leads to concerns that AI is going to replace humanity. Prophets are using words like “intelligent,” “sentient,” and “conscious” to describe their assistants.

This is all based upon the criterion for intelligence proposed by Alan Turing. The problem is that Turing’s test (can I tell whether I am conversing with a computer?) is not a meaningful test of intelligence. Intelligence is the ability to change behavior in response to a change in the environment. The environment known to a chatbot is grossly impoverished in comparison to the environment experienced by humans, and the chatbot’s capacity to navigate even that environment is almost non-existent – it does so only under the rules defined by its training algorithm. What these systems actually do is propagate human intelligence and combine language in novel ways.

Without intelligence, claims of sentience and consciousness fall aside.

The real problem with these technologies is that people will use them to create the impression that they themselves are intelligent and moral actors. Copying the speech of Gandhi or MLK Jr. is going to become easy. We are going to have to invest in deeper means of assessing capabilities – such as actually observing what people do.

Vitamin D and Immune Response

This post is not intended to be medical advice. Please consult with your provider if you have symptoms of COVID-19.

Vitamin D has long been recognized as critical to bone health. Bone health was the primary focus of a recent double-blind study called VALID. It is also why medical warnings restrict Vitamin D supplementation: too much Vitamin D can result in a condition called hypercalcemia.

A clinical nutritionist in my BNI (Business Networking International) team touts the use of Vitamin D for immune efficacy against influenza and viral infections. Noticing that immune efficacy wasn’t part of the VALID study, I did some research on the biochemistry of Vitamin D.

There has been some direct study of the role of Vitamin D in viral infections, but the rate of infection among supplemented subjects was only marginally lower. Still, we know winter as “cold and flu season.” There’s no good reason why that should be: unlike bacteria, viruses don’t care about the weather. So there must be some weather-related effect, and a significant one is that we spend more time outside in the sun during spring and summer. That promotes production of Vitamin D.

Some researchers have looked for Vitamin D receptors on immune cells, and have linked them to two systems. The first promotes antimicrobial response (the body’s ability to kill BACTERIA, which doesn’t help in viral infections). The second – AND THIS IS REALLY IMPORTANT – suppresses cytokine production that causes inflammation in association with an immune response.

So Vitamin D doesn’t help prevent infection, but it seems to suppress inflammation. This explains why we have “cold and flu” season during winter. The lack of sun suppresses natural production of Vitamin D, so we suffer strong inflammation when we get sick. That inflammation makes us feel achy – the body is trying to keep us from going out into the cold. We still catch viruses at other times of the year; we just don’t feel so miserable.

Preventing inflammation is critical in fighting pneumonia: it is the swelling that causes the lungs to fill with fluid and reduces oxygen intake.

Recognizing this, the Chinese developed an aggressive protocol that suppresses the immune system to reduce inflammation when the patient is close to death from COVID-19.

Apparently the same might be possible with Vitamin D supplementation.

Note that at many elder care facilities, residents are less likely to spend time outdoors. This can cause Vitamin D deficiency and thus stronger inflammation. Caregivers in those facilities might beneficially administer Vitamin D supplements to their residents.

Block-Head Chain

We may be losing the trade war in goods with China, but the virtual trade war is running nicely. It seems the US should soon resume its historical dominance in natural resources production…

Excerpted from the link:

Extracting a dollar’s worth of cryptocurrency such as bitcoin from the deep Web consumes three times more energy than digging up a dollar’s worth of gold.

There are now hundreds of virtual currencies and an unknown number of server farms around the world running around the clock to unearth them, more than half of them in China.

Privacy Parts

Apple CEO Tim Cook presented an address in Brussels attacking industry practices that customize our online experience to maximize opportunities for third parties hoping to sell us goods and services. The major actors are Google and Facebook, of course.

I guess that Apple has the benefit of having indoctrinated an entire generation to prefer its products over others. It doesn’t need to market any longer – the masses wait breathlessly. And how exactly do you know which features will inspire them to throw away functional devices and upgrade? Hopefully not by analyzing iPhone usage patterns, Tim.

But what really galls is that Cook and his executive team manufacture devices in countries and facilities where the right to privacy is violated in far more concrete terms. Workers sleep in large dormitories on factory sites, working for corporations that collaborate with a dictatorial government to create devices that spy on its citizens.

Yes, the road to destruction is broad, Tim. Don’t complain of the mote in your neighbor’s eye.

There is No ‘Learn’

Zayd Enam at Stanford University has posted an accessible discussion of why developers struggle to perfect artificial intelligence systems. The key point is that patrons of AI development aren’t willing to build an algorithm and turn it loose on the world. They expect the algorithm to be trained against a set of representative inputs, and the responses evaluated to assure that the frequency and severity of improper responses pose a tolerable risk.

This is not a new principle. Auto manufacturers model collisions to identify “worst case” scenarios where structural elements will interact to kill passengers that would normally have survived the initial collision. They balance the likelihood of these scenarios to produce the “safest car possible.” In most cases, auto safety systems (air bags, crumple zones) will save lives, but in some cases they will kill.

It’s not evil, it’s unavoidable.

The problem with AI is that it assumes control of decision making. In an auto accident, human beings make the decisions that result in the accident. Those decisions unfold at a pace that is perceptible to other drivers, who presumably can take action to protect themselves. This happens every day.

Of course, we’ve all seen a texting teenager drive through an intersection on the red when cars start moving with the green left-turn arrow. Situations like these – accidents generated by inattention or stupidity – are easily preventable by tireless digital components whose only job is to monitor for specific errors. If traffic signals had been installed as part of the internet of things, that could be done without artificial intelligence: the timing system could broadcast the signal state to the vehicles’ sensors, which would prevent the front car from moving. But since that system is not in place, engineers use AI to interpret camera pictures to determine the state of the lights. Obviously the AI algorithms must be at least equal to the judgment of an attentive human driver, which means that the correctness standard must be high.

But the motivation for the development of the AI systems is the inattentive teenager.

The more dangerous class of AI applications comprises those running in environments that humans cannot perceive at all. Obvious cases are industrial control (dangerous conditions) and electronic stock trading (high speed). The motivation here is profit, pure and simple. When an opportunity presents itself, the speed and precision of the response are paramount. Conversely, however, when the algorithm acts in error, that error is compounded more rapidly than humans can intervene.

Again, this is not new: in the 1700s, the British crown commissioned governors to manage its far-flung empire, and could control abuse of that authority only through the exchange of letters delivered by ships. In that situation, power was distributed and compartmentalized: the thirteen American colonies had governors and parliamentary bodies to resist executive misdeeds.

This is also the approach taken with training of natural learning systems: children. We don’t give children absolute authority over their lives. In fact, wise parents extend such authority only gradually as competency is demonstrated.

This suggests an approach to the problem of developing and deploying AI systems. No single system should be deployed on its own. Instead, they should be deployed in communities, with a managerial algorithm that polls the proposed actions and allows implementation only when consensus exists. The results are fed back into the training system, and the polling is weighted towards the most effective algorithms. When a Newton or Einstein arises from the community – an AI system that always produces the best result – only then is absolute authority conferred.

Until the system changes. For example, a robot housekeeper may operate on high power until a baby is brought home, and then be forced back into low-power mode until it has adapted to the presence of the unpredictable element brought into its demesne.

The Watcher Watchers

Bill Gates is teaming up with two corporations to build a system that will produce real-time images of the earth on demand. This might allow citizens to monitor the activities of nation states as they unfold. The only point of doubt: the compute power on each satellite is slated to be 10x the combined processing power of all existing satellites in orbit. Either they’re going to use extremely low-power technology, or have dauntingly large solar panel arrays…

Or existing satellites are really, really dumb.