
There is No ‘Learn’

Zayd Enam at Stanford University has posted an accessible discussion of why developers struggle to perfect artificial intelligence systems. The key point is that the patrons of AI development aren't willing to build an algorithm and simply turn it loose on the world. They expect the algorithm to be trained against a set of representative inputs, and its responses evaluated to ensure that the frequency and severity of improper responses pose a tolerable risk.
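To make that acceptance criterion concrete, here is a minimal sketch in Python of a severity-weighted evaluation. It assumes a labeled evaluation set and a per-error severity score; the names and the risk budget are illustrative, not part of Enam's discussion.

```python
# Minimal sketch of the acceptance test described above: run a trained
# model against representative inputs and check that the frequency and
# severity of its errors stay within a risk budget. The names and the
# default budget are hypothetical.

def acceptable(model, eval_set, risk_budget=0.01):
    """eval_set: iterable of (input, expected, severity) triples, where
    severity (0..1) weights how bad a wrong answer on that input is."""
    total_risk = 0.0
    for x, expected, severity in eval_set:
        if model(x) != expected:
            total_risk += severity
    # Average severity-weighted error rate must stay under the budget.
    return total_risk / len(eval_set) <= risk_budget
```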

This is not a new principle. Auto manufacturers model collisions to identify "worst case" scenarios in which structural elements interact to kill passengers who would otherwise have survived the initial collision. They weigh the likelihoods of these scenarios to produce the "safest car possible." In most cases, auto safety systems (air bags, crumple zones) will save lives, but in some cases they will kill.

It’s not evil, it’s unavoidable.

The problem with AI is that it assumes control of decision making. In an auto accident, human beings make the decisions that lead to the accident, and those decisions unfold at a speed that is perceptible to other drivers, who can presumably take action to protect themselves. This happens every day.

Of course, we've all seen a texting teenager drive through an intersection on the red when cars start moving with the green left-turn arrow. Situations like these – accidents generated by inattention or stupidity – are easily preventable by tireless digital components whose only job is to monitor for specific errors. If traffic signals had been installed as part of the Internet of Things, that could be done without artificial intelligence: the timing system could broadcast the signal state to vehicle sensors, which would prevent the front car from moving. But since that system is not in place, engineers use AI to interpret camera images to determine the state of the lights. Obviously the AI algorithms must be at least equal to the judgment of an attentive human driver, which means that the correctness standard must be high.
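For the connected-signal alternative, here is a minimal sketch of the interlock, assuming a controller that broadcasts its phase and a vehicle that holds the brake on red. The message format and all names are hypothetical; no such standard is assumed to exist.

```python
# Minimal sketch of the broadcast interlock described above: the signal
# controller publishes its state, the vehicle caches it, and the car
# refuses to creep forward on red. Names and message format are
# hypothetical, not an existing protocol.

from dataclasses import dataclass

@dataclass
class SignalState:
    intersection_id: str
    phase: str  # "red", "green", "green_left_arrow", ...

class VehicleInterlock:
    def __init__(self):
        self.last_state = {}

    def on_broadcast(self, msg: SignalState):
        # Cache the most recent broadcast per intersection.
        self.last_state[msg.intersection_id] = msg.phase

    def may_proceed(self, intersection_id: str) -> bool:
        # No broadcast received: fall back to human judgment.
        phase = self.last_state.get(intersection_id)
        if phase is None:
            return True
        return phase != "red"

interlock = VehicleInterlock()
interlock.on_broadcast(SignalState("5th_and_main", "red"))
assert not interlock.may_proceed("5th_and_main")  # brake hold on red
```

Note that the interlock only ever withholds motion; it never initiates it, so a lost broadcast degrades to today's status quo rather than to a new failure mode.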

But the motivation for the development of the AI systems is the inattentive teenager.

The more dangerous class of AI applications is those running in environments that humans cannot perceive at all. Obvious cases are industrial control (dangerous conditions) and electronic stock trading (high speed). The motivation here is profit, pure and simple. When an opportunity presents itself, the speed and precision of the response are paramount. Conversely, when the algorithm acts in error, that error compounds more rapidly than humans can intervene.
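One way to bound such runaway errors is an automated stop that fires faster than any human watcher could. A minimal sketch, with hypothetical names and limits:

```python
# Minimal sketch of an automated stop for a runaway trading algorithm:
# because errors compound in microseconds, the check must live inside
# the loop itself rather than with a human supervisor. All names and
# limits here are hypothetical.

class CircuitBreaker:
    def __init__(self, max_drawdown: float):
        self.max_drawdown = max_drawdown
        self.pnl = 0.0
        self.tripped = False

    def record_fill(self, profit: float):
        self.pnl += profit
        if self.pnl < -self.max_drawdown:
            self.tripped = True  # halt before the error compounds further

breaker = CircuitBreaker(max_drawdown=10_000.0)
for fill in [-3_000.0] * 10:  # a misfiring strategy, losing on every trade
    if breaker.tripped:
        break                 # trading halted automatically, mid-stream
    breaker.record_fill(fill)
print("halted with pnl", breaker.pnl)  # -12000.0, after only four fills
```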

Again, this is not new: in the 1700s, the British crown commissioned governors to manage its far-flung empire, and could control abuse of that authority only through the exchange of letters delivered by ships. In that situation, power was distributed and compartmentalized: the thirteen American colonies had governors and parliamentary bodies to resist executive misdeeds.

This is also the approach we take in training natural learning systems: children. We don't give children absolute authority over their lives; in fact, wise parents extend such authority only gradually, as competency is demonstrated.

This suggests an approach to the problem of developing and deploying AI systems. No single system should be deployed on its own. Instead, systems should be deployed in communities, with a managerial algorithm that polls their proposed actions and allows implementation only when consensus exists. The results are fed back into the training system, and the polling is weighted toward the most effective algorithms. When a Newton or an Einstein arises from the community – an AI system that always produces the best result – only then is absolute authority conferred.
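A minimal sketch of such a managerial layer: it polls a community of models, acts only on weighted consensus, feeds results back as weight updates, and confers sole authority only on a clearly dominant member. The thresholds and the multiplicative weight update are illustrative assumptions, not a prescription from the post.

```python
# Minimal sketch of the managerial algorithm described above. Members
# are callables mapping an observation to a proposed action. Consensus
# and dominance thresholds are hypothetical.

from collections import defaultdict

class Community:
    def __init__(self, members, consensus=0.8):
        self.weights = {m: 1.0 for m in members}
        self.consensus = consensus

    def decide(self, observation):
        votes = defaultdict(float)
        for member, weight in self.weights.items():
            votes[member(observation)] += weight
        action, support = max(votes.items(), key=lambda kv: kv[1])
        total = sum(self.weights.values())
        # Act only when weighted consensus exists; otherwise defer.
        return action if support / total >= self.consensus else None

    def feedback(self, observation, good_action):
        # Feed results back: boost members that proposed the good action.
        for member in self.weights:
            if member(observation) == good_action:
                self.weights[member] *= 1.1

    def prodigy(self):
        # Confer absolute authority only on a clearly dominant member.
        best = max(self.weights, key=self.weights.get)
        total = sum(self.weights.values())
        return best if self.weights[best] / total > 0.9 else None
```

The multiplicative update mirrors the weighted-majority family of online learning algorithms, in which consistently reliable members gradually come to dominate the vote.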

Until the system changes. For example, a robot housekeeper may operate at high power until a baby is brought home, and then be forced back into low-power mode until it has adapted to the presence of the unpredictable element brought into its demesne.
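A minimal sketch of that demotion rule, assuming a running measure of prediction error as the signal that the environment has changed; the window and threshold are hypothetical.

```python
# Minimal sketch of revoking authority when the environment changes:
# track recent prediction error, and drop to low-power mode whenever
# surprise drifts past a threshold. Names and thresholds are
# hypothetical.

from collections import deque

class AuthorityGovernor:
    def __init__(self, window=50, drift_limit=0.2):
        self.errors = deque(maxlen=window)
        self.drift_limit = drift_limit
        self.mode = "high_power"

    def observe(self, predicted, actual):
        self.errors.append(0.0 if predicted == actual else 1.0)
        drift = sum(self.errors) / len(self.errors)
        # A new baby in the house shows up as a spike in surprise;
        # authority is restored only once the error rate settles again.
        self.mode = "low_power" if drift > self.drift_limit else "high_power"
```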
