Possibly in light of the weaponising of fake news during the 2016 American presidential election, and the increasing frequency with which acts of violence are being livestreamed, the debate around methods of moderation on the internet seems to have heated up again. The crux of the problem is that more content is uploaded to popular platforms every minute than could be screened by even an army of human moderators, and the algorithms aren’t yet capable of parsing the nuances in text (let alone images or video) that would be required for them to make determinations on context, intent, and suitability. Add in The Guardian’s recent reporting on Facebook’s policies for its human moderators, and the problem seems ever more intractable. On the flipside, I’m noticing the case being made more frequently that turning over more control to algorithms, however necessary it seems, is in fact actively worsening the standard of content and the experience of using the internet. In his recent talk at re:publica ‘17 in Berlin (titled ‘Notes from an Emergency’), Maciej Ceglowski phrased it this way:
The danger facing us is not Orwell, but Huxley. The combo of data collection and machine learning is too good at catering to human nature, seducing us and appealing to our worst instincts. We have to put controls on it. The algorithms are amoral; to make them behave morally will require active intervention.
This suggestion, that the algorithms are exacerbating the problem rather than helping to eliminate it, is also echoed in this recent piece on Ev Williams in The New York Times:
The trouble with the internet, Mr. Williams says, is that it rewards extremes. Say you’re driving down the road and see a car crash. Of course you look. Everyone looks. The internet interprets behavior like this to mean everyone is asking for car crashes, so it tries to supply them.