Are you a tech company interested in learning about moderation procedures for terrorist and other harmful content, whilst respecting human rights and limiting impact on freedom of speech?
Are you interested in learning more about existing content moderation features and approaches deployed within the global tech sector?
Are you a researcher, government official, or civil society representative interested in learning more about the strategic objectives of content moderation, the challenges of focusing solely on content removal, and the benefits of considering other strategies?
Tech platforms’ content moderation practices have a significant impact on online speech and freedom of expression, especially when the default option is content removal. To safeguard freedom of expression online, it is important that tech platforms, including smaller companies, have the appropriate tools to moderate harmful and illegal content in a proportionate manner. A range of content moderation measures that allow for a targeted response to terrorist and harmful content, or that empower users to decide whether to engage with certain content, can thus help ensure a necessary and proportionate response to terrorist and violent extremist use of the internet.
As part of our e-learning webinar series, organised in partnership with the Global Internet Forum to Counter Terrorism (GIFCT), this session will look at content moderation practices within the tech sector, the objectives they serve, and their desired outcomes. In particular, we will focus on strategies deployed by tech companies to ensure effective and appropriate moderation of their platforms without relying solely on content removal. In doing so, we will examine the effectiveness and challenges of content removal and deplatforming for terrorist and violent extremist material and actors, weighing these against other moderation strategies.