Chatbots and Intelligence

Chatbot technologies are prompting predictions that automation is going to enter the white-collar space. This inevitability leads to concerns that AI is going to replace humanity. Prophets are using words like “intelligent,” “sentient,” and “conscious” to describe their assistants.

This is all based upon the criterion for intelligence proposed by Alan Turing. The problem is that Turing's test (can I tell whether I am conversing with a computer?) is not a meaningful test of intelligence. Intelligence is the ability to change behavior in response to a change in the environment. The environment known to a chatbot is grossly impoverished in comparison to the environment experienced by humans, and the chatbot's capacity to navigate even that environment is almost non-existent – it does so only under the rules defined by its training algorithm. What these systems actually do is propagate human intelligence, recombining language in novel ways.

Without intelligence, claims of sentience and consciousness fall away.

The real problem with these technologies is that other people will use them to create the impression that they themselves are intelligent and moral actors. Copying the speech of Gandhi or MLK Jr. is going to become easy. We are going to have to invest in deeper means of assessing capabilities – such as actually observing what people do.

Vitamin D and Immune Response

This post is not intended to be medical advice. Please consult with your provider if you have symptoms of COVID-19.

Vitamin D has long been recognized as critical to bone health. That is the primary focus of a recent double-blind study called VALID. It is also why medical warnings caution against excessive Vitamin D supplementation: too much can result in a condition called hypercalcemia.

A clinical nutritionist in my BNI (Business Networking International) team touts the use of Vitamin D for immune efficacy against influenza and viral infections. Noticing that immune efficacy wasn’t part of the VALID study, I did some research on the biochemistry of Vitamin D.

There has been some direct study of the role of Vitamin D in viral infections, but supplementation lowered the rate of infection only marginally. Still, we know winter as "cold and flu season." There's no good reason why that should be: unlike bacteria, viruses don't care about the weather. So there must be some weather-related effect, and a significant one is that we spend more time outside in the sun during spring and summer, which promotes production of Vitamin D.

Some researchers have looked for Vitamin D receptors on immune cells, and have linked them to two systems. The first promotes the antimicrobial response (the body's ability to kill BACTERIA, which doesn't help in viral infections). The second – AND THIS IS REALLY IMPORTANT – suppresses production of the cytokines that cause inflammation during an immune response.

So Vitamin D doesn't help prevent infection, but it does seem to suppress inflammation. This explains why we have "cold and flu" season during winter: the lack of sun suppresses natural production of Vitamin D, so inflammation is stronger when we get sick. That is what makes us feel achy – the body is trying to keep us from going out into the cold. We still catch viruses at other times of the year; we just don't feel so miserable.

Preventing inflammation is critical in fighting pneumonia: it is the swelling that causes the lungs to fill with fluid and reduces oxygen intake.

Recognizing this, Chinese physicians developed an aggressive protocol that suppresses the immune system to reduce inflammation when a patient is close to death from COVID-19.

Apparently the same might be possible with Vitamin D supplementation.

Note that at many elder care facilities, residents are less likely to spend time outdoors. This can cause Vitamin D deficiency, and thus inflammation. Caregivers in those facilities might beneficially administer Vitamin D supplements to their residents.

Block-Head Chain

We may be losing the trade war in goods with China, but the virtual trade war is running nicely. It seems the US should soon resume its historical dominance in natural resources production…

Excerpted from the link:

Extracting a dollar’s worth of cryptocurrency such as bitcoin from the deep Web consumes three times more energy than digging up a dollar’s worth of gold.

There are now hundreds of virtual currencies and an unknown number of server farms around the world running around the clock to unearth them – more than half of them in China.

Privacy Parts

Apple CEO Tim Cook presented an address in Brussels attacking industry practices that customize our online experience to maximize opportunities for third parties hoping to sell us goods and services. The major actors are Google and Facebook, of course.

I guess that Apple has the benefit of having indoctrinated an entire generation to prefer its products over others. It doesn’t need to market any longer – the masses wait breathlessly. And how exactly do you know which features will inspire them to throw away functional devices and upgrade? Hopefully not by analyzing iPhone usage patterns, Tim.

But what really galls is that Cook and his executive team manufacture devices in countries and facilities where the right to privacy is violated in far more concrete terms. Workers sleep in large dormitories on factory sites, working for corporations that collaborate with a dictatorial government to create devices that spy on its citizens.

Yes, the road to destruction is broad, Tim. Don’t complain of the mote in your neighbor’s eye.

There is No ‘Learn’

Zayd Enam at Stanford University has posted an accessible discussion of why developers struggle to perfect artificial intelligence systems. The key point is that patrons of AI development aren't willing to build an algorithm and turn it loose on the world. They expect the algorithm to be trained against a set of representative inputs, and the responses evaluated to assure that the frequency and severity of improper responses pose a tolerable risk.
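As a sketch of what that acceptance gate might look like in practice – the predict() interface, severity weights, and risk budget below are all hypothetical stand-ins, not anyone's actual test harness:

    from typing import Callable, Iterable, Tuple

    # Hypothetical severity weights for each class of improper response.
    SEVERITY = {"minor": 1.0, "major": 10.0, "critical": 100.0}

    def tolerable_risk(predict: Callable[[str], str],
                       cases: Iterable[Tuple[str, str, str]],
                       budget: float = 0.5) -> bool:
        """True if the severity-weighted error rate stays within the budget."""
        cases = list(cases)
        risk = sum(SEVERITY[severity]
                   for inputs, expected, severity in cases
                   if predict(inputs) != expected)
        return risk / len(cases) <= budget

    # A trivial stand-in "model" evaluated against three representative cases.
    model = lambda text: text.upper()
    cases = [("go", "GO", "minor"),
             ("stop", "STOP", "critical"),
             ("yield", "Yield", "major")]
    print(tolerable_risk(model, cases))  # False: one "major" error is too risky

Nothing ships until the weighted error rate clears the bar the patron set; the bar itself, of course, is a judgment call.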

This is not a new principle. Auto manufacturers model collisions to identify "worst case" scenarios in which structural elements interact to kill passengers who would otherwise have survived the initial collision. They weigh the likelihood of these scenarios against one another to produce the "safest car possible." In most cases, auto safety systems (air bags, crumple zones) will save lives, but in some cases they will kill.

It’s not evil, it’s unavoidable.

The problem with AI is that it assumes control of decision making. In an auto accident, human beings make the decisions that result in the accident. Those decisions unfold at a speed that is perceptible to other drivers, who presumably can take action to protect themselves. This happens every day.

Of course, we've all seen a texting teenager drive through an intersection on the red when cars start moving with the green left-turn arrow. Situations like these – accidents generated by inattention or stupidity – are easily preventable by tireless digital components whose only job is to monitor for specific errors. If traffic signals had been installed as part of the internet of things, that could be done without artificial intelligence: the timing system could broadcast the signal state to the vehicles' sensors, which would prevent the front car from moving. But since that system is not in place, engineers use AI to interpret camera images to determine the state of the lights. Obviously the AI algorithms must be at least equal to the judgment of an attentive human driver, which means that the correctness standard must be high.
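The IoT alternative, by contrast, is almost trivially simple. Here's a sketch, assuming a made-up broadcast message format (nothing standardized, purely an illustration):

    from dataclasses import dataclass
    from enum import Enum

    class SignalState(Enum):
        RED = "red"
        GREEN = "green"
        GREEN_LEFT_ARROW = "green_left_arrow"

    @dataclass
    class SignalBroadcast:
        # Hypothetical message the intersection's timing system would broadcast.
        intersection_id: str
        lane: str              # "through" or "left_turn"
        state: SignalState

    def may_proceed(msg: SignalBroadcast, vehicle_lane: str) -> bool:
        """Interlock: release the front car only when its own lane has a green."""
        return msg.lane == vehicle_lane and msg.state in (
            SignalState.GREEN, SignalState.GREEN_LEFT_ARROW)

    # The texting teenager sits in the through lane; the left-turn arrow
    # going green does not release the car.
    msg = SignalBroadcast("5th_and_main", "left_turn",
                          SignalState.GREEN_LEFT_ARROW)
    assert not may_proceed(msg, vehicle_lane="through")

No camera, no trained model – a ten-line interlock. The AI is needed only because the broadcast isn't there.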

But the motivation for the development of the AI systems is the inattentive teenager.

The more dangerous class of AI applications comprises those running in environments that humans cannot perceive at all. Obvious cases are industrial control (dangerous conditions) and electronic stock trading (high speed). The motivation here is profit, pure and simple. When an opportunity presents itself, the speed and precision of the response are paramount. Conversely, when the algorithm acts in error, that error compounds more rapidly than humans can intervene.

Again, this is not new: in the 1700s, the British crown commissioned governors to manage its far-flung empire, and could control abuse of that authority only through the exchange of letters delivered by ships. In that situation, power was distributed and compartmentalized: the thirteen American colonies had governors and parliamentary bodies to resist executive misdeeds.

This is also the approach taken with training of natural learning systems: children. We don’t give children absolute authority over their lives. In fact, wise parents extend such authority only gradually as competency is demonstrated.

This suggests an approach to the problem of developing and deploying AI systems: no single system should be deployed on its own. Instead, systems should be deployed in communities, with a managerial algorithm that polls the proposed actions and allows implementation only when consensus exists. The results are fed back into the training system, and the polling is weighted toward the most effective algorithms. Only when a Newton or Einstein arises from the community – an AI system that always produces the best result – is absolute authority conferred.
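A minimal sketch of such a managerial algorithm – the consensus threshold and weighting scheme here are invented for illustration:

    from collections import Counter

    class Overseer:
        """Polls a community of models, acts only on consensus, and shifts
        voting weight toward the members that prove most effective."""

        def __init__(self, members, threshold=0.75, lr=0.1):
            self.members = members          # callables: observation -> action
            self.weights = [1.0] * len(members)
            self.threshold = threshold      # fraction of total weight required
            self.lr = lr                    # how fast weights shift on feedback

        def decide(self, observation):
            votes = Counter()
            for member, weight in zip(self.members, self.weights):
                votes[member(observation)] += weight
            action, support = votes.most_common(1)[0]
            if support / sum(self.weights) < self.threshold:
                return None                 # no consensus: defer to a human
            return action

        def feedback(self, observation, correct_action):
            # Reward members that proposed the right action; decay the rest.
            for i, member in enumerate(self.members):
                if member(observation) == correct_action:
                    self.weights[i] *= 1 + self.lr
                else:
                    self.weights[i] *= 1 - self.lr

Once one member's weight comes to dominate the poll – the community's Einstein – its vote alone clears the threshold, and authority is effectively conferred.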

Until the system changes. For example, a robot housekeeper may operate on high power until a baby is brought home, and then be forced back into low-power mode until it has adapted to the presence of the unpredictable element brought into its demesne.

The Watcher Watchers

Bill Gates is teaming up with two corporations to build a system that will produce real-time images of the earth on demand. This might allow citizens to monitor the activities of nation states as they unfold. The only point of doubt: the compute power on each satellite is slated to be 10x the combined processing power of all existing satellites in orbit. Either they’re going to use extremely low-power technology, or have dauntingly large solar panel arrays…

Or existing satellites are really, really dumb.

Own It, Zuckerberg

When I started blogging, I entered the online world through the enlightened portal at Zaadz. Zaadz was a mediated forum for spiritual dialog. Its founder, Brian Johnson, hired a technology team to ensure that the forum facilitated meaningful dialog.

Among the features unique to the platform were:

  • Comment threading, with the ability to block threads.
  • The ability to ignore content from any account.

Basically, it was up to each participant to manage their experience, and to choose to interact with those who maintained civil rapport. Even with these features, the final paid moderator was at her wits' end trying to keep the warring parties apart, often deleting acrimonious threads and banning people from forums.

That was bad enough, but my experience of social media since then has only gone downhill. For a person who can't devote hours each day to social media, there are two problems: people who like to natter about anything and everything, and people with a vested interest in controlling the message. The former recreate the small-town neighborhood; the latter generate virtual cults.

As a generator of ideas, I find little gain in the former, and the structure of most social media platforms plays into the hands of the administrators of the latter. The critical failure is the “most recent comments” feature that pushes serious discussion off the screen.

Obviously, filtering mechanisms such as those provided by Zaadz are essential. I think that AI has a role to play. Among the features that would be helpful, and seem within reach of the current generation of technology:

  • Relevance to the original post.
  • Similarity to other comments (suppressing reposts).
  • Civility (profanity and character attacks).

To this I would add comment threading.

All of these models could be run on the end-user machine, which would protect Facebook's revenue model. But they should be developed, and their usage monitored, by Facebook to evaluate the social health of the communities it manages.
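A sketch of what that client-side scoring could look like – the similarity and civility functions below are toy stand-ins for the real classifiers Facebook would have to develop:

    from dataclasses import dataclass

    def text_similarity(a: str, b: str) -> float:
        # Toy stand-in: word overlap (Jaccard). A real filter would use a
        # trained language model's embeddings.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    PROFANITY = {"idiot", "moron"}  # toy list; a real model would judge tone

    def civility(comment: str) -> float:
        return 0.0 if PROFANITY & set(comment.lower().split()) else 1.0

    @dataclass
    class Scores:
        relevance: float   # similarity to the original post
        novelty: float     # 1 - max similarity to earlier comments
        civility: float    # 1.0 civil, 0.0 profanity or personal attack

    def score(comment: str, post: str, thread: list) -> Scores:
        return Scores(
            relevance=text_similarity(comment, post),
            novelty=1.0 - max((text_similarity(comment, c) for c in thread),
                              default=0.0),
            civility=civility(comment))

    def visible(s: Scores, floor: float = 0.3) -> bool:
        """Each reader sets their own floor; filtering happens on their machine."""
        return min(s.relevance, s.novelty, s.civility) >= floor

Aggregate distributions of these scores, reported back without the comments themselves, would give Facebook the community-health signal it needs.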