Trolling – posting a comment on a discussion forum for the sole purpose of creating hostility among the participants – is the most dangerous threat to the exchange of ideas on the web. The tech community is trying to limit the damage. Some new artificial-intelligence engines profile accounts; others monitor for certain logical constructs (for example, any statement starting “You think…”).
Engadget reports that a site in Norway withholds comments until the poster demonstrates knowledge of the main article: a script generates a multiple-choice question from the article’s text, and the comment is accepted only once the question is answered correctly.
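Engadget’s report doesn’t describe the site’s implementation, but the mechanism can be sketched in a few lines. This is a toy stand-in only: the names (`make_quiz`, `may_comment`) and the approach of blanking out a long word from the article are all assumptions, not the actual system.

```python
import random
import re


def make_quiz(article: str, num_choices: int = 4):
    """Blank one distinctive word out of the article and offer
    multiple choices for filling it back in."""
    words = re.findall(r"[A-Za-z]{6,}", article)
    if not words:
        raise ValueError("article too short to quiz on")
    answer = random.choice(words)
    # Blank the answer out of the first sentence that contains it.
    sentence = next(s.strip() for s in article.split(".") if answer in s)
    question = sentence.replace(answer, "_____")
    # Distractors come from the rest of the article's vocabulary.
    pool = sorted(set(words) - {answer})
    distractors = random.sample(pool, min(num_choices - 1, len(pool)))
    choices = distractors + [answer]
    random.shuffle(choices)
    return question, choices, answer


def may_comment(chosen: str, answer: str) -> bool:
    """Gate: the comment form unlocks only on a correct answer."""
    return chosen == answer
```

A forum would call `make_quiz` when the comment box is opened and `may_comment` on submission; a real question generator would of course need to be far less gameable than this.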
Better still would be verifying each comment’s relevance against the original post and the earlier comments in the thread. Given that a script can generate the questions and a script can answer them, that should be possible; perhaps relevance, too, could be checked with a question posed to the poster.
Unfortunately, all of these solutions play into the hands of state-run trolls, such as the Russian “fake media” mills. By writing scripts that work out the correct answer, they can post far more efficiently than human commenters, and thus come to dominate a forum’s contents.
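The troll’s counter-script can be even shorter than the quiz itself. Since the correct answer is, by construction, a word taken from the article, a script need only pick the choice the article mentions most often. `auto_answer` is hypothetical, a sketch of why a fill-in-the-blank gate alone is weak:

```python
def auto_answer(article: str, choices: list[str]) -> str:
    """Defeat a fill-in-the-blank quiz: return the choice that
    appears most often in the article text."""
    return max(choices, key=lambda c: article.lower().count(c.lower()))
```

One call per comment, no human in the loop: exactly the efficiency advantage described above.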
Here’s another option: build an AI engine that ranks each comment’s relevance to the article and the discussion, and let readers filter the thread by that ranking. That would let those seeking serious discussion not only shield themselves from trolls but also skip past comments that are just socializing. Offering “Good point!” is nice, but post it enough times and more substantial commentary falls off the bottom of the page.
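A minimal sketch of the ranking idea, with bag-of-words cosine similarity standing in for a real AI engine; `relevance`, `filter_thread`, and the 0.2 threshold are all invented for illustration:

```python
import math
import re
from collections import Counter


def _vec(text: str) -> Counter:
    """Word-count vector over lowercased words."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def relevance(comment: str, context: str) -> float:
    """Cosine similarity between comment and context; 0.0 = no overlap."""
    a, b = _vec(comment), _vec(context)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def filter_thread(comments: list[str], article: str,
                  threshold: float = 0.2) -> list[str]:
    """Keep only comments whose relevance to the article clears the bar."""
    return [c for c in comments if relevance(c, article) >= threshold]
```

Under this scheme a bare “Good point!” scores zero against almost any article and drops below the reader’s filter, while on-topic comments survive.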