The United States on Wednesday broke with 18 governments and top American tech firms by declining to endorse a New Zealand-led response to the live-streamed shootings at two Christchurch mosques, saying free-speech concerns prevented the White House from formally endorsing the largest campaign to date targeting extremism online.
The “Christchurch Call,” unveiled at an international gathering in Paris, commits foreign countries and tech giants to be more vigilant about the spread of hate on social media. It reflects heightened global frustrations with the inability of Facebook, Google and Twitter to restrain hateful posts, photos and videos that have spawned real-world violence.
Leaders from across the globe pledged to counter online extremism, including through new regulation, and to “encourage media outlets to apply ethical standards when depicting terrorist events online.” Companies including Facebook, Google and Twitter, meanwhile, said they’d work more closely to ensure their sites don’t become conduits for terrorism. They also committed to accelerated research and information sharing with governments in the wake of recent terrorist attacks.
The call is named after the New Zealand city where a shooter killed 51 people in a March attack broadcast on social-media sites. Facebook, Google and Twitter struggled to take down copies of the violent video as fast as it spread on the web, prompting an international backlash from regulators who felt malicious actors had evaded Silicon Valley’s defenses too easily. Before the attack, the shooter also posted online a hate-filled manifesto that included references to previous mass killings.
New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron organized the call to action, part of Ardern’s international plea this year for greater social-media accountability. Along with New Zealand and France, countries such as Australia, Canada and the United Kingdom endorsed the document, as did tech giants including Amazon, Facebook, Google, Microsoft and Twitter.
“We’ve taken practical steps to try and stop what we experienced in Christchurch from happening again,” Ardern said in a statement.
America’s top tech giants celebrated the call – a voluntary effort, not formal regulation – as an important step toward tackling one of the Web’s biggest challenges. Amazon, Facebook, Google, Microsoft and Twitter issued a joint statement saying “it is right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence.”
But the White House opted against endorsing the call to action, and President Trump did not join world leaders and tech executives in attending the gathering in Paris. In a statement, U.S. officials said they stand “with the international community in condemning terrorist and violent extremist content online,” and support the goals of the Christchurch call to action. But the White House still said it is “not currently in a position to join the endorsement.”
A day earlier, as negotiations progressed, White House officials raised concerns that the document might run afoul of the First Amendment.
“We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press,” the White House said Wednesday. “Further, we maintain that the best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging.”
Around the world, the Christchurch attack sparked renewed scrutiny of social media. Facebook, Google and Twitter each have hired thousands of reviewers and created new artificial-intelligence tools with the goal of thwarting hate speech, extremism and terrorism online. Despite those efforts, the tech giants were unable to stop the spread of the Christchurch videos.
Fewer than 200 people watched the live stream during the attack, which Facebook said it removed 29 minutes after it began. But within 24 hours, users had attempted to re-upload the video onto Facebook more than 1.5 million times. About 300,000 of those videos slipped through and were published on the site before being taken down by the site’s content-moderation teams and systems designed to automatically remove blacklisted content.
In response, tech companies on Wednesday committed to enforcing policies that prohibit terrorist and extremist content, improving technology that can spot harmful posts in real time and issuing regular reports about their progress. Facebook, Google and Twitter already have such rules and tools in place, though at times they’ve fielded sharp criticism for failing to use them effectively. They also agreed to share more information among each other, particularly to stop the real-time spread of extremism during emergencies like the Christchurch attack.
These companies also promised to implement “appropriate checks on livestreaming,” with the aim of ensuring that videos of violent attacks aren’t broadcast widely, in real time, online. To that end, Facebook this week announced that users who violate its rules – such as sharing content from known terrorist groups – could be prohibited from using its live-streaming tools. The company has said such a restriction might have prevented the Christchurch shooter from broadcasting the attack using his account.
“Terrorism and violent extremism are complex societal problems that require an all-of-society response,” Amazon, Microsoft, Facebook, Google and Twitter said in their joint statement. “For our part, the commitments we are making today will further strengthen the partnership that Governments, society and the technology industry must have to address this threat.”
The Christchurch call arrives amid mounting global pressure on Silicon Valley. Around the world, regulators have introduced or adopted tough new rules over the past year that require social-media sites to take down offensive content faster or face steep fines.
U.S. officials also have struggled with the rise of online extremism and its ability to incite real-world violence. Self-proclaimed neo-Nazis used Facebook as an organizing tool ahead of their deadly 2017 rally in Charlottesville, Virginia, for example, and the shooter who opened fire on a synagogue in Pittsburgh last year long had posted anti-Semitic screeds on fringe websites.
But even federal policymakers who have grown furious with Silicon Valley have struggled to rein in the industry without violating the First Amendment, which protects even some repugnant speech online. The issue loomed large over U.S. officials as they decided whether to endorse the Christchurch call, sources told the Post.
The White House’s stance drew criticism from some experts who have called for stronger regulation across the Web. Alistair Knott, a computer-science professor at the University of Otago in New Zealand, said the absence of a U.S. endorsement could undercut the global argument for controlling how hate and violence spread online.
“It seems insufficient to say that free speech prevents the U.S. from doing something about violent extremist attacks,” said Carl Tobias, a professor at the University of Richmond law school. “Congress should consider carefully crafted legislation that both protects core First Amendment interests and public safety.”
But others worried the Christchurch document could potentially blur the lines between government power and free expression.
“It’s hard to take seriously this administration’s criticism of extremist content, but it’s probably for the best that the United States didn’t sign,” said James Grimmelmann, a Cornell Tech law professor. “The government should not be in the business of ‘encouraging’ platforms to do more than they legally are required to – or than they could be required to under the First Amendment.”
“The government ought to do its ‘encouraging’ through laws that give platforms and users clear notice of what they’re allowed to do, not through vague exhortations that can easily turn into veiled threats,” Grimmelmann said.
Adrian Shahbaz, a research director at Freedom House, a think tank partially funded by the U.S. government, said he was “alarmed by the vague call for governments to ban more speech” in a way that could have “negative consequences for human rights.”
Greater regulation of tech companies is needed, but “we shouldn’t be calling on tech companies to remove content without also demanding that they act with far more transparency and accountability,” he said. “Otherwise, companies will censor first and ask questions later, leaving users with little recourse to appeal poor decisions and uphold their right to free expression.”