A structure billows with smoke at its tallest point and then crumbles. Any human this week would have recognized this image of destruction as the burning of Notre Dame Cathedral in Paris. A machine at YouTube, however, did not: It saw 9/11 instead.

YouTube’s tool for appending accurate information to sensitive content failed Monday as Notre Dame’s burning spire collapsed. Instead of providing details about the event alongside three news accounts’ live streams, or simply leaving those videos unannotated, YouTube’s algorithm added a box explaining the attack on the twin towers more than 17 years ago.

The mishap was at once ironic and instructive. YouTube built its “information panels” to fight misinformation, in part in response to a conspiracy theory, which ended up trending on its platform, accusing a survivor of the February 2018 high school shooting in Parkland, Florida, of being a “crisis actor.” Yet by linking what appears to have been an awful accident to terrorism, the panels promoted hoaxes and confusion across social media sites. There’s a lesson there: As platforms finally start to take responsibility for their role in curating what appears on their turf, they must recognize that real responsibility means that for the foreseeable future, humans, not only machines, will have to do much of the work.

Though YouTube pledged in late 2017 to have more than 10,000 moderators, and Facebook reportedly has about 15,000, some continue to insist that algorithms are the eventual answer to the scourge of illegal and otherwise dangerous content. YouTube’s decision to base its information panel project on the judgments of computers was a case in point. The company was trying, rightly, to correct for users’ fondness for hoaxes, but instead of involving humans in the process, it gave computers the job.

Certainly, human review cannot possibly be applied to every addition to a worldwide open platform the way editors watch over traditional media. That is the price we pay for access to such far-reaching stores of information. But as effective and efficient as machines may be at enforcing basic rules, and as essential as they are for triaging inundations of posts and uploads, there are some things they may never do “better” than we can with our own flawed minds. That’s especially true in areas where context is key, such as hate speech, conspiracy theories and, yes, breaking news events.

Perhaps someday the algorithms of companies such as YouTube will be able to distinguish one burning building from another. Right now, though, the assumption should not be that these systems will inevitably save us but the opposite: that they will fall short. And in the most sensitive cases, such as the misinformation epidemic that YouTube’s panels were designed to help contain, people, not only programs, need to be paying attention.