Microsoft put up a speech bot named ‘Tay’ on Twitter last week, and it took less than twenty-four hours for it to become a sexist Nazi. While labelled as “artificial intelligence,” Tay did not actually understand what it was saying – it merely parroted the speech of other users. On 4chan’s /pol/ board, that includes a lot of dialog that most of us would consider inappropriate.
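To see how mindless this kind of parroting is, consider a toy sketch (this is purely illustrative, not Tay’s actual implementation): a bot that records only which word follows which in the text it is fed, then replays those transitions at random. It has no notion of meaning, so if its diet is toxic, its output is toxic.

```python
import random
from collections import defaultdict

def train(corpus_words):
    """Record which words were seen following each word in the corpus."""
    table = defaultdict(list)
    for a, b in zip(corpus_words, corpus_words[1:]):
        table[a].append(b)
    return table

def parrot(table, start, length=8, seed=0):
    """Generate text by randomly walking the transition table.

    The bot never evaluates what it says; it can only echo
    combinations of what it was fed.
    """
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = table.get(word)
        if not followers:
            break  # dead end: the corpus never continued past this word
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

words = "the bot repeats what the users say and the users say awful things".split()
table = train(words)
print(parrot(table, "the"))
```

Every word it emits comes straight from its training text, which is exactly the problem: garbage in, garbage out.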
What distresses me is that Microsoft hoped to have Tay demonstrate the conversational skills of a typical teenager. Well, maybe it did!
In a recent dialog on the “liar Clinton,” I probed for specific proof, and received the standard Fox News sound bites. When I described the Congressional hearings on Benghazi, the accuser had the grace to be chastened. This is typical of so much of our political dialog: people parrot sayings without availing themselves of the official forums in which real information is exchanged. The goal is to categorize people as “us” or “other,” thereby justifying arrangements for the distribution of power that benefit the “us.”
Donald Trump is a master of this political practice. Apparently his campaign doesn’t do any polling. He simply puts up posts on Facebook, and works the lines that people like into his speeches.
So I worry: did Microsoft actually succeed in its demonstration? Most American teenagers don’t understand the realities of the Holocaust or the difficulties of living under a totalitarian regime. In that experiential vacuum, do they actually evolve dialog in the same way that Tay did – with the simple goal of “fitting in?”
Somewhat more frightening is that Donald Trump appears to employ algorithms not too different from Tay’s. For God’s sake, this man could be president of the most powerful country in the world! He’s got to have more going on upstairs than a speech bot does!
Fortunately, many teenagers, when brought into dialog regarding offensive speech, actually appreciate receiving a grounding in fact. You’d hope that our politicians would feel the same.