Donald Trump's campaign promise to close parts of the internet to the Islamic State has not materialized. However, the terrorist organization's online activity has declined during his presidency, a result of its territorial losses, content moderation by social media platforms and international law enforcement action.
In December 2015, Trump called for "closing that internet up in some way" to impede the recruitment of ISIS fighters. He floated the idea of having Bill Gates, the co-founder of Microsoft, look into the possibility.
Nearly five years later, there have been several successes in the effort to defeat ISIS. By January 2018, the terrorist organization had lost 93% of its territory in the Middle East. In October 2019, Abu Bakr al-Baghdadi, the leader of ISIS, died in a raid by U.S. forces.
Still, ISIS has thousands of fighters and continues to recruit and publish propaganda online. Unlike lawmakers in some other countries, Congress has not passed legislation that would require internet service providers or social media platforms to remove terrorist propaganda, although some legislators have introduced bills that would do so.
But many platforms have removed such content anyway.
Since Trump's election, artificial intelligence has helped social media platforms remove content from terrorist organizations. Facebook has reported that it automatically removes more than 99% of ISIS and al-Qaida content published on its platform. YouTube reported that, between October and December 2019, about 90% of videos that violated the company's policy against violent extremism were removed before they had 10 views.
In May 2019, prompted by the terrorist attack at mosques in Christchurch, New Zealand — which was live-streamed on Facebook — several governments and online service providers adopted a non-binding pledge called the "Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online." Tech companies like Twitter, Facebook and Google pledged to take additional measures to address terrorist content on their platforms, such as updating terms of service and improving technology that detects terrorist content.
Although it has endorsed similar intergovernmental initiatives, the U.S. did not sign the pledge "due to policy and legal concerns," according to a 2019 State Department terrorism report.
"It is true that the big tech platforms have somewhat cleaned up their acts in this regard, most especially Twitter," said Cori Dauber, a communication professor at the University of North Carolina at Chapel Hill, in an email. "But none of that, as far as I know, is a function of federal action — the platforms were responding to pressure, sure, but they've been under pressure on these issues for years."
Even if the U.S. did pass legislation forcing internet platforms to ban or block terrorist propaganda, the First Amendment could stand in the way. Federal law also generally shields internet platforms from legal responsibility for what users do or say on their services. That differs from European countries such as France, which has passed legislation forcing tech platforms to remove terrorist content.
Trump has tried to challenge those protections, not to counteract terrorists but to pressure platforms that he says silence conservative voices. In a May 2020 executive order, the president called for limiting Section 230(c) of the Communications Decency Act, the law that grants tech platforms their legal immunity. The order argues that platforms that restrict certain voices should lose those protections. Legal experts have said that, absent legislation from Congress, the order is not legally binding.
Finally, even if Trump did manage to ban ISIS from certain parts of the internet, it's unlikely that would stop the organization's recruitment.
In April 2018, authorities in eight countries took part in an operation that seized ISIS servers hosting parts of the group's propaganda apparatus. A year and a half later, Europol, the European Union's law enforcement agency, announced that it had stripped ISIS propaganda from platforms like Twitter and Google. Telegram, an encrypted messaging service, contained the most offending material.
However, shortly after the takedown, the BBC reported that ISIS supporters were discussing moving to other, more underground platforms to evade authorities and spread propaganda. And ISIS is known to use the darknet, anonymized parts of the internet that can't be reached through traditional search engines like Google.
"Going after terrorist communications networks is kind of like treating a gunshot wound with a Band-Aid," said Patrick Eddington, a research fellow at the Cato Institute. "The real key to defeating them is defeating the ideology."
Despite American tech companies' in-house efforts to moderate and remove terrorist content, as well as international takedown operations, it's clear that ISIS still uses parts of the internet to publish propaganda and recruit new members. And the U.S. government has not passed legislation compelling internet service providers to ban content from terrorist organizations.
"You can't really 'shut down' or 'close down' the internet," said Chelsea Daymon, a terrorism and political violence researcher at American University. "While you can remove content and shut down accounts that are pro-ISIS (which will slow the spread of content reach), the concept of shutting down parts of the internet shows a misunderstanding about what the internet is and how it works."
We rate this promise Stalled.