A joint statement, endorsed by Britain, France and Italy, said international leaders had “challenged” Silicon Valley to build technology aimed at ensuring that internet users “tempted by violent extremism are not exposed to content that reinforces their extremist inclination – so-called algorithmic confinement.”
The prime minister of the Netherlands, Mark Rutte, urged big companies to help small companies, especially those that offer users ways to communicate anonymously.
Julie Bishop, the foreign minister of Australia, said she valued free expression. But the internet, she said, “cannot be an ungoverned space where terrorists operate.”
Not so long ago, these sentiments would have been dismissed by many in libertarian-minded Silicon Valley. Any suggestion that the internet be governed was unacceptable. The ability to be anonymous, or to use pseudonyms on the internet, was seen as a virtue, especially on Twitter. So too was unfettered speech.
But the success of terrorist groups in exploiting social media platforms to promote their agendas is now putting internet brands in an uncomfortable position, and the industry has been forced to address the problem.
At Wednesday’s event, called the Leaders Meeting on Preventing Terrorist Use of the Internet, representatives of the world’s most prominent technology companies described how they have been responding. They rushed to demonstrate what they were doing to take down terrorist propaganda from their platforms and pledged to do more.
Facebook said that it was using artificial intelligence to identify when “terrorist imagery” was uploaded to the site, and that it had established a special team to assist with law enforcement requests for information about terrorist attacks.
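Facebook did not detail how its detection works. One common building block in this space, however, is matching uploads against a database of fingerprints ("hashes") of previously identified imagery. The toy sketch below illustrates only that exact-match step; the hash values and function names are hypothetical, and production systems use perceptual hashes (which tolerate cropping and re-encoding) rather than a cryptographic hash like SHA-256.

```python
import hashlib

# Hypothetical database of fingerprints of previously identified images.
# (Illustrative only: real systems use perceptual hashes, not SHA-256,
# so that slightly altered copies of an image still match.)
KNOWN_BAD_HASHES = {
    # SHA-256 of an empty byte string, standing in for a flagged image
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def flag_upload(image_bytes: bytes) -> bool:
    """Return True if the upload's fingerprint matches a known-bad hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

In practice, a match like this would route the upload to human reviewers rather than trigger automatic removal, since (as the companies note below) machines struggle to judge context.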
Monika Bickert, head of global policy management at Facebook, said the company had 150 people, including engineers and language specialists, “working primarily to counter terrorism.”
“We maintain a specialized terrorist threat team that responds within minutes to emergency requests from law enforcement,” said Ms. Bickert, a former federal prosecutor. “And if we become aware of a credible threat of real world harm, we proactively reach out to authorities and inform them.”
Twitter’s annual transparency report took pains to say that Twitter had taken down more than 935,000 accounts in roughly the last two years, and that most of those had been detected by the company’s own tools, before anyone flagged them.
Google said it was targeting messages intended to change the minds of those searching for what the company identifies as terrorist content.
All three companies, as well as Microsoft, came together earlier this year to establish what they call the Global Internet Forum to Counter Terrorism.
But the companies also cautioned that technology alone was insufficient for the task. Machines cannot always distinguish between what is dangerous and what has social value, they said. Videos uploaded to YouTube by human rights groups to show atrocities, for example, were once taken down mistakenly, said Kent Walker, general counsel at Google, which owns YouTube.
“Machines,” he said, “are not yet at the stage where they can replace human judgment.” And Mr. Walker offered a sobering note of caution. “There is no magic computer program,” he told the room of foreign dignitaries, “to eliminate terrorist content.”
The joint statement by Britain, France and Italy implicitly acknowledged how vexing the problem is.
“No individual nation state can respond to this threat alone,” the statement said. “The response must be global, and it must be collaborative.”