Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.
Tech Against Terrorism Updates
This week, our director, Adam Hadley, spoke with POLITICO about violent far-right extremists’ attempts to radicalise and recruit supporters of US President Trump in the wake of last week’s storming of the Capitol. Read the article here.
Tech Against Terrorism published a response to the EU’s Digital Services Act. Read our full response here.
We submitted a response to the DSA consultation process, a summary of which can be found here.
We wrote an article for Counter Terror Business on terrorist use of the internet, and what we at Tech Against Terrorism are doing to tackle the threat. Read it here.
TCAP Office Hours: registration now open!
Following the success of the Terrorist Content Analytics Platform (TCAP) office hours in 2020, you can sign up for our January sessions, to be held on 27 and 28 January. In these office hours, we will elaborate on our successful launch of automated terrorist content alerts to tech platforms.
You can register for the session on 27 January, at 5:00pm GMT, here, and for the session on 28 January, at 12:00pm GMT, here.
Last week’s reader’s digest covered content moderation actions taken by tech companies following the storming of the US Capitol by supporters of President Trump. This week, we provide updates on content moderation actions taken by tech companies since the last reader’s digest:
Tech company action in the aftermath of the Capitol storming
Twitter has permanently suspended the @realdonaldtrump account, “due to the risk of further incitement of violence”. It has additionally removed tweets from the @POTUS account, and announced that since 9 January, more than 70,000 accounts have been suspended, primarily those that have shared QAnon content.
Airbnb has announced that, in response to various local, state and federal officials asking people not to travel to Washington, D.C., it will cancel reservations and block new bookings in the Washington, D.C. metro area during inauguration week.
Facebook & Instagram: Facebook has announced that it will remove content mentioning “stop the steal” from Instagram and Facebook.
Snapchat has announced that it will terminate Donald Trump’s account on January 20 following an earlier decision to suspend the account.
YouTube has removed new content uploaded to the Donald J. Trump channel and has halted new uploads to the account for a week, as well as issuing a strike against the channel for violating the platform’s policies on inciting violence.
Google Play has suspended Parler’s app from the Play store. In a public statement Google has said, “in light of this ongoing and urgent public safety threat, we are suspending the app’s listings from the Play Store until it addresses these issues”.
Apple has removed Parler from its App Store, due to the platform not taking “adequate measures to address the proliferation of these threats to people’s safety”.
Amazon has removed Parler from its cloud hosting service, due to “violent content” that violates its terms of service.
To learn more about various tech platforms’ actions following the storming of the US Capitol and ahead of the presidential inauguration, see First Draft’s comprehensive chart outlining actions and providing direct statements and policies referenced by each of the platforms.
In other news
Uganda has ordered internet service providers to block all social media platforms and messaging apps until further notice, according to a letter from the country’s communications regulator seen by Reuters.
The Program on Extremism at the George Washington University has launched a project to create a central database of court records related to the events of January 6, 2021, which will be updated as additional individuals are charged with criminal activities and new records are introduced into the criminal justice system.
DarkMarket, the world’s largest illegal marketplace on the dark web, has been taken offline in an international operation involving Germany, Australia, Denmark, Moldova, Ukraine, the United Kingdom (the National Crime Agency), the USA (DEA, FBI, and IRS) and Europol.
The Facebook Oversight Board Should Review Trump’s Suspension: In this piece, Evelyn Douek argues that the decision to suspend Donald Trump’s Facebook account should be submitted to Facebook’s Oversight Board. Douek terms the recent actions by online platforms, such as account suspension, against Donald Trump as the ‘Great Deplatforming’, noting how it has ignited a “raucous debate about free speech and censorship”, as well as “prompted questions about the true reasons behind the bans”. Douek explains how Facebook’s Oversight Board was built to “allay exactly these concerns”, stressing that Facebook should refer its decision to suspend Donald Trump’s account to the Board for review. Douek highlights an important issue: because Facebook has decided to suspend Trump’s entire account, and not merely remove individual pieces of content the president has posted, the suspension does not meet the criteria currently set out in the board’s bylaws for appeals. However, Douek explains that the company can refer cases to the board for review on its own initiative. She thus stresses that Facebook has the power to send this case to the board, and, given that the decision is “extremely controversial”, “polarizing”, and “an exercise of awesome power”, it definitely should. In addition, Douek underlines the fact that the decision raises an important question of how much Facebook should consider political and social context outside the platform in making its content moderation decisions. Asserting that the Oversight Board was “created to be a check and balance on Facebook’s decision-making processes”, Douek urges the Oversight Board’s review to ensure the decision’s legitimacy. (Douek, Lawfare, 11.01.2021).
Beyond Platforms: Private Censorship, Parler, and the Stack: This piece, by Jillian C. York, Corynne McSherry, and Danny O’Brien, discusses the recent actions by infrastructure companies such as Amazon Web Services, Google’s Android and Apple’s iOS app stores to cut off service to Parler, following the storming of the US Capitol. York, McSherry and O’Brien stress that these decisions should cause reflection, stating that the power of tech companies to suspend hosting or support for speech that goes against a platform’s Terms of Service carries different “risks when a group of companies comes together to ensure that certain speech or speakers are effectively taken offline altogether”. They argue that decisions to remove content or users made by internet service providers (ISPs) raise greater concerns for free expression, especially when there are few if any competitors. They further explain how companies like Facebook and YouTube include content moderation as part of the service they provide, whereas Amazon’s “ad-hoc decision to cut off” hosting Parler “in the face of public pressure, should be of concern to anyone worried about how decisions about speech are made in the long run.” York, McSherry and O’Brien argue that “infrastructure level takedowns move us further toward a thoroughly locked-down, highly monitored web, from which a speaker can be effectively ejected at any time”. Thus, they pose the questions, “who should decide what is acceptable speech, and to what degree should companies at the infrastructure layer play a role in censorship?” They argue the answer is that wherever possible, users should decide for themselves, while companies at the infrastructure layer should not. (York, McSherry and O’Brien, Electronic Frontier Foundation, 11.01.2021).
This week we listened to The Lawfare Podcast episode in which Evelyn Douek and Quinta Jurecic speak with Jonathan Zittrain about the choices by Twitter, Facebook, and other platforms to ban Trump in the wake of the Capitol riot — and how it all intersects with the Section 230 debate.
Far-right violent extremism and terrorism
Canada considers adding Proud Boys to terrorist list alongside Isis and al-Qaida: In this piece, Leyland Cecco sheds light on Canadian officials’ consideration of designating the far-right Proud Boys as a terrorist organisation, following their role in the “mob attack on the US Capitol” last week. The group, whose founder is Canadian and which also operates in Canada, was banned by Facebook and Instagram in 2018 after violating the platforms’ hate policies, and has been classified as an extremist organisation by the FBI. Cecco explains how, in recent years, the group has become a central figure in the violent white supremacist movement in the US, noting that “members view Donald Trump as a key ally”. According to Cecco, Canada’s public safety minister, Bill Blair, said his office was closely watching the Proud Boys and the “ideologically-motivated violent extremists” within the group. Cecco notes that the minister’s office had not said when a determination on the group’s status as a terrorist group would be made. He writes that a terrorist designation in Canada would mean that the group’s assets could be seized or forfeited by Canadian authorities. Cecco concludes by suggesting that this potential designation, following Canada’s 2019 addition of the neo-Nazi groups Blood & Honour and Combat 18 to its terrorism list, indicates that Canada “sees a growing threat from far-right organisations”. (Cecco, The Guardian, 11.01.2021).
Violent extremism is not a uniform phenomenon: The key differences in prevention of left-wing, right-wing, and Islamist extremism: In this piece, Rune Ellefsen and Jan Jämte discuss insights from a recent study they conducted, “Countering Extremism(s): Differences in local prevention of the left-wing, right-wing, and Islamist extremism”. Ellefsen and Jämte note that three distinct phenomena, left-wing extremism, right-wing extremism and militant Islamist extremism, are referred to under the common label of “violent extremism”. They underline how grouping these distinct phenomena under “violent extremism” emphasises commonalities and downplays their differences. Thus, they examined what distinguishes the ways in which public servants actually perceive and respond to the three milieus, and how these differences could be conceptualised. To do this they conducted interviews with twenty-seven public servants in Sweden, mainly frontline practitioners involved in local preventive work. According to their results, there is a “clear discrepancy between the uniform way violent extremism is presented in policy, and how front-line practitioners experience the different forms of extremism at the local level”. Their results demonstrate how the milieus are seen to differ in three crucial respects: first, they represent different levels and types of threats; second, they hold core values that resonate differently with dominant values in mainstream society; and third, they involve different challenges for the employment of countermeasures. Ellefsen and Jämte conclude that this demonstrates how a simplistic presentation in policy is “unhelpful and misleading” both for understanding the targeted milieus and for grasping the complexity of local preventive work. (Ellefsen and Jämte, Center for Research on Extremism, 08.01.2021).