There is No ‘Learn’

Zayd Enam at Stanford University has posted an accessible discussion of why developers struggle to perfect artificial intelligence systems. The key point is that patrons of AI development aren’t willing to build an algorithm and turn it loose on the world. They expect the algorithm to be trained against a set of representative inputs, and the responses evaluated to ensure that the frequency and severity of improper responses pose a tolerable risk.
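
In code, that acceptance gate amounts to something like the sketch below. The model interface, test-case format, and severity weighting are hypothetical placeholders for illustration, not anything taken from Enam’s discussion.

    # Sketch only: gate deployment on the frequency and severity of improper
    # responses. `model`, `test_cases`, and `severity` are hypothetical stand-ins.
    def risk_is_tolerable(model, test_cases, severity, budget=0.01):
        """True if the severity-weighted error rate over representative
        inputs stays within the agreed risk budget."""
        weighted_errors = 0.0
        for case in test_cases:
            response = model(case["input"])
            if response != case["expected"]:
                weighted_errors += severity(case, response)  # severe failures count more
        return weighted_errors / len(test_cases) <= budget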

This is not a new principle. Auto manufacturers model collisions to identify “worst case” scenarios in which structural elements interact to kill passengers who would otherwise have survived the initial collision. They weigh the likelihood of these scenarios against one another to produce the “safest car possible.” In most cases, auto safety systems (air bags, crumple zones) will save lives, but in some cases they will kill.

It’s not evil, it’s unavoidable.

The problem with AI is that it assumes control of decision making. In an auto accident, human beings make the decisions that result in the accident. Those decisions unfold at a pace that is perceptible to other drivers, who can presumably take action to protect themselves. This happens every day.

Of course, we’ve all seen a texting teenager drive through an intersection on the red when cars start moving with the green left-turn arrow. Situations like these – accidents generated by inattention or stupidity – are easily preventable by tireless digital components whose only job is to monitor for specific errors. If traffic signals had been installed as part of the internet of things, that could be done without artificial intelligence: the timing system could broadcast the signal state to the vehicles’ sensors, which would prevent the front car from moving. But since that system is not in place, engineers use AI to interpret camera images to determine the state of the lights. Obviously the AI algorithms must be at least equal to the judgment of an attentive human driver, which means that the correctness standard must be high.
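
Had the signals been networked, the vehicle-side interlock could be almost trivially simple. The message format and names in this sketch are assumptions made for illustration, not any real vehicle-to-infrastructure standard.

    # Sketch only: an interlock driven by a broadcast signal state rather than
    # AI interpretation of camera images. The message fields are assumed.
    def may_enter_intersection(signal_broadcast, approach_lane):
        """Hold the car unless the timing system reports green for this lane."""
        state = signal_broadcast.get(approach_lane, "unknown")
        return state == "green"  # red, yellow, or unknown all keep the brakes applied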

But the motivation for the development of the AI systems is the inattentive teenager.

The more dangerous class of AI applications is the one running in environments that humans cannot perceive at all. Obvious cases are industrial control (dangerous conditions) and electronic stock trading (high speed). The motivation here is profit, pure and simple. When an opportunity presents itself, the speed and precision of the response are paramount. Conversely, when the algorithm acts in error, the error is compounded more rapidly than humans can intervene.

Again, this is not new: in the 1700s, the British crown commissioned governors to manage its far-flung empire, and could control abuse of that authority only through the exchange of letters delivered by ships. In that situation, power was distributed and compartmentalized: the thirteen American colonies had governors and parliamentary bodies to resist executive misdeeds.

This is also the approach taken in training natural learning systems: children. We don’t give children absolute authority over their lives. In fact, wise parents extend such authority only gradually, as competency is demonstrated.

This suggests an approach to the problem of developing and deploying AI systems. No single system should be deployed on its own. Instead, systems should be deployed in communities, with a managerial algorithm that polls their proposed actions and allows implementation only when consensus exists. The results are fed back into the training system, and the polling is weighted toward the most effective algorithms. When a Newton or Einstein arises from the community – an AI system that always produces the best result – only then is absolute authority conferred.
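
A toy version of that managerial layer might look like the sketch below. The quorum threshold, the weight-update rule, and the propose() interface on the member systems are my own assumptions for illustration, not an established scheme.

    # Sketch only: a managerial algorithm that polls a community of AI systems,
    # acts only on weighted consensus, and re-weights the poll from outcomes.
    # Quorum, learning rate, and the propose() interface are assumptions.
    class Community:
        def __init__(self, members, quorum=0.8, lr=0.05):
            self.members = list(members)              # candidate AI systems
            self.weights = {m: 1.0 for m in self.members}
            self.quorum = quorum
            self.lr = lr
            self.last_proposals = {}

        def decide(self, situation):
            self.last_proposals = {m: m.propose(situation) for m in self.members}
            votes = {}
            for m, action in self.last_proposals.items():
                votes[action] = votes.get(action, 0.0) + self.weights[m]
            action, support = max(votes.items(), key=lambda kv: kv[1])
            if support / sum(self.weights.values()) < self.quorum:
                return None                           # no consensus: defer to a human
            return action

        def feedback(self, chosen_action, outcome_good):
            # Weight the polling toward the members whose proposals proved effective.
            for m, action in self.last_proposals.items():
                agreed = (action == chosen_action)
                if agreed == outcome_good:
                    self.weights[m] *= 1 + self.lr
                else:
                    self.weights[m] *= 1 - self.lr

Returning None when the quorum fails is the crucial design choice: disagreement defers to a human rather than to the loudest member.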

Until the system changes. For example, a robot housekeeper may operate on high power until a baby is brought home, and then be forced back into low-power mode until it has adapted to the presence of the unpredictable element brought into its demesne.

The Watcher Watchers

Bill Gates is teaming up with two corporations to build a system that will produce real-time images of the earth on demand. This might allow citizens to monitor the activities of nation-states as they unfold. The only point of doubt: the compute power on each satellite is slated to be 10x the combined processing power of all existing satellites in orbit. Either they’re going to use extremely low-power technology, or carry dauntingly large solar panel arrays…

Or existing satellites are really, really dumb.

Own It, Zuckerberg

When I started blogging, I entered the online world through the enlightened portal at Zaadz. Zaadz was a mediated forum for spiritual dialog. Its founder, Brian Johnson, hired a technology team to ensure that the forum facilitated meaningful dialog.

Among the features unique to the platform were:

  • Comment threading, with the ability to block threads.
  • The ability to ignore content from any account.

Basically, it was up to each participant to manage their own experience, and to choose to interact with those who maintained a civil rapport. Even with these features, the final paid moderator was at her wits’ end trying to keep the warring parties apart, often deleting acrimonious threads and banning people from forums.

That was bad enough, but my experience of social media since then has only gone downhill. For someone who can’t devote hours each day to social media, there are two problems: people who like to natter about anything and everything, and people with a vested interest in controlling the messaging. The former recreate the small-town neighborhood; the latter generate virtual cults.

As a generator of ideas, I find little gain in the former, and the structure of most social media platforms plays into the hands of the administrators of the latter. The critical failure is the “most recent comments” feature that pushes serious discussion off the screen.

Obviously, filtering mechanisms such as those provided by Zaadz are essential, and I think that AI has a role to play. Among the features that would be helpful, and that seem within reach of the current generation of technology:

  • Relevance to the original post.
  • Similarity to other comments (suppressing reposts).
  • Civility (profanity and character attacks).

To this I would add comment threading.
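
As a rough sketch of how those filters might hang together, the fragment below runs everything client-side. The word-overlap measure and the thresholds are crude stand-ins for what trained relevance and civility models would actually compute.

    # Sketch only: client-side comment filtering. The word-overlap measure and
    # the thresholds are crude stand-ins for trained relevance/civility models.
    PROFANITY = set()  # placeholder: a real blocklist or classifier goes here

    def _words(text):
        return set(text.lower().split())

    def _overlap(a, b):
        wa, wb = _words(a), _words(b)
        return len(wa & wb) / max(1, len(wa | wb))

    def keep_comment(comment, original_post, shown_comments,
                     min_relevance=0.05, max_similarity=0.8):
        if _words(comment) & PROFANITY:
            return False                          # civility
        if _overlap(comment, original_post) < min_relevance:
            return False                          # relevance to the original post
        if any(_overlap(comment, c) > max_similarity for c in shown_comments):
            return False                          # suppress near-duplicate reposts
        return True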

All of these models could be run on the end-user machine, which would protect Facebook’s revenue model. But they should be developed, and their usage monitored, by Facebook to evaluate the social health of the communities it manages.