The Indonesian government began regulating online content in 2008, when Law No.11 on Electronic Information and Transactions was passed, and, according to Freedom House’s 2021 Freedom on the Net report on Indonesia, routinely requires online content to be removed. As of April 2021, the Ministry of Communication and Informatics reportedly had 20,453 pieces of online terrorism content removed (Freedom House, 2021). However, platforms were until recently exempted from legal liability; this changed with the passing of Ministerial Regulation 5 (MR5), which holds platforms legally liable for user-generated content and makes safe harbour protection conditional on platforms’ compliance with legal requirements and cooperation with the Indonesian authorities. Indonesia’s restrictions on online content can also be understood in the context of the country’s counterterrorism provisions and the government’s use of internet throttling and shutdowns, as well as social media blockages, at times of political unrest. This broader environment has led human rights organisations to raise concerns about the risk of infringing on fundamental rights, and on the right to freedom of expression in particular.

Indonesia’s regulatory framework:

  • Ministerial Regulation 5 of 2020 (MR5), the so-called “Social Media Law”, passed in November 2020 with a six-month transitional period. Clarifies the legal requirements and obligations for foreign “private electronic system operators”, although the regulation applies to both national and international platforms. Applies to all types of online services, including content-sharing and instant messaging services.
  • Law No.15 of 2003: The country’s counterterrorism framework, which was passed into law following the 2002 Bali bombings. The law was amended in 2018, raising numerous human rights concerns. The amendment was first proposed in 2016 but met political opposition over the significant role it afforded to the military; after a series of suicide bombings in East Java and Sumatra in 2018, the President threatened to impose it by decree, and the law was passed.
    • The law does not specifically deal with online content and terrorist use of the internet; however, it allows law enforcement to intercept and examine any information “expressed, sent, received, or stored electronically.”
    • Art. 1(2) of the law defines terrorism as: “[the use of] violence or threat of violence to create a widespread atmosphere of terror or fear, resulting in mass casualties and/or causing destruction or damage to vital strategic objects, the environment, public facilities, or an international facility” (Human Rights Watch, 2018, Letter on Indonesia’s New Counterterrorism Law).
    • Art. 14 further defines “threat of violence” to include any form of speech via electronic or non-electronic means “which may incite fear in a person” (Human Rights Watch, 2018).
    • The law also prohibits incitement to terrorism, without specifying the mode of communication. Individuals have already been prosecuted under this law for activities related to terrorist use of the internet, including registering and hosting a website (UNODC, 2012, The Use of the Internet for Terrorist Purposes).

Relevant national bodies:

  • Kominfo, the Ministry of Communication and Informatics: responsible for information and communications affairs in Indonesia and for issuing the ID certification for tech platforms under the MR5.
  • Indonesia Security Incident Response Team on Internet Infrastructure: a public-private partnership initiative which conducts research and monitoring on “threats and disruptions to internet-protocol-based telecommunication networks” (UNODC, 2012). The Response Team includes the country’s police and Attorney General’s office and the Indonesian Internet Service Providers Association, amongst others.
  • National Cyber and Encryption Agency: established by Presidential Regulation No.43 of 2017, the Agency is charged with protecting Indonesia from cyber attacks, increasing public awareness of cyber-related issues, and both superintending and coordinating cyber-related government agencies (Ng, 2017, Indonesia Establishes New Cybersecurity and Encryption Agency, Talking Tech). The Agency is a non-ministerial body accountable to the President via the Coordinating Minister for Political, Legal and Security Affairs (ibid). The Agency also has the power to monitor online content (Freedom House, 2021).

Key takeaways for tech platforms:

Law No.11 of 2008 and its 2016 amendment:

  • Indonesian authorities can request platforms to remove and block content that violates applicable laws and regulations, including “immoral content, hate speech, insult or defamation” (Molina, 2016).
  • Platforms have to comply with a retention period for electronic content, which “obliges every system operated by an electronic system provider to reproduce in complete form any electronic information and/or electronic document for the duration of the retention period stipulated under the law” (UNODC, 2012).

Ministerial Regulation 5

  • The MR5 introduces a number of stringent requirements for tech platforms, including legal liability for user-generated content if the platform does not comply with the law or cooperate with the country’s authorities.
  • The MR5 includes three key pillars:
    1. Platform registration;
    2. Removal of prohibited content;
    3. Law enforcement access to electronic systems and data.
  • All platforms with online users in Indonesia, even if based abroad, are required to register with the Kominfo and obtain an ID certification to be allowed to operate in Indonesia, although it is unclear to which platforms the MR5 applies in practice. During the transition period, platforms initially had until 24 May 2021 to register; the deadline then became six months after the implementation of a single sign-on system. Included in the information platforms have to provide to register are details about the number of users and the value of transactions in Indonesia. Platforms failing to register can be blocked by the Kominfo from operating in the country.
  • Prohibited content is defined in the MR5 as “anything that violates any provision of Indonesia’s laws and regulations, or creates ‘community anxiety’ or ‘disturbance in public order.’”
    • The Kominfo is charged with providing a definition of “community anxiety” and “public disorder”.
    • Under the MR5, content that could be considered a means to provide access to prohibited information (for instance a tutorial on how to use a VPN) is also considered prohibited content and must be removed.
  • The headline provision of the law is the mandate for ISPs to facilitate removal of prohibited online content, with platforms required to:
    • Remove prohibited information or documents, as well as anything that could inform or provide access to prohibited content;
    • Ensure that the service neither contains nor facilitates the dissemination of prohibited content.
  • The MR5 empowers a “Minister for Access Blocking” (see Rodriguez, 2021, Indonesia’s Proposed Online Intermediary Regulation May Be the Most Repressive Yet, Electronic Frontier Foundation) to coordinate blocking requests with which platforms have to comply.
    • ISPs have 24 hours to respond to a request to remove content, and 4 hours for “urgent” requests, which include terrorism-related requests.
    • Platforms need to comply with removal and blocking requests or face fines and sanctions, with the ultimate sanction for non-compliance being the complete blocking of their services in Indonesia “even if the prohibited information [is] legal under international human rights law” (Rodriguez, 2021).
    • Law enforcement, judicial authorities, the Kominfo, or members of the public can send blocking requests. All requests go through the “Access Blocking Minister” who forwards them to tech companies.
    • Platforms are to appoint a local point of contact (PoC) responsible for responding to and facilitating content removal or data access orders.
    • The MR5 also requires platforms to disclose user information capable of identifying those posting prohibited content to facilitate surveillance “by administrative agencies and law enforcement.”
  • The law includes multiple provisions empowering access by law enforcement and the wider government to electronic systems and data, with platforms required to:
    • Provide law enforcement with access to electronic data or systems for “oversight” purposes. Although this access is limited to “oversight”, in line with Indonesian law, it does not require a court order and is therefore capable of being abused (Article 19, 2021).
    • Grant law enforcement requests for access to electronic data (whether for monitoring or oversight purpose) within five days. No mechanisms are envisaged by which platforms might appeal requests for access to data.
  • The MR5 lists various sanctions if a platform fails to comply with the requirements, including:
    • Warning letters
    • Temporary suspension
    • Administrative penalties (fines)
    • Revocation of a registration certificate (suspension of the ISP’s services in the country)
  • Despite the transition period for platforms to register themselves with the Kominfo, the Ministry has already started to make use of the MR5 to require the removal of content. Freedom House reported in its 2021 Freedom on the Net report that the Kominfo used the MR5 in April 2021 to require YouTube to block videos from an individual claiming to be the 26th prophet of Islam (Freedom House, 2021).

Tech Against Terrorism’s analysis and commentary

The MR5 risks flouting international human rights standards and best practice

The MR5’s broad terms and stringent requirement to remove content flagged by the Kominfo have led experts and civil society groups to warn that it risks putting Indonesia’s online regulatory framework in direct opposition to international human rights standards and threatens freedom of expression online.

As with other online regulations analysed in the Online Regulation Series, including those of the UK and Poland, the scope of content prohibited under the MR5 is broad and goes beyond what is normally considered illegal under domestic law by covering content that can lead to “public unrest and disturbance of public order” or provide instruction on accessing prohibited content (for example, by means of a VPN). In so doing, the MR5 creates a differentiated legal regime for online speech and, according to the Global Network Initiative (GNI), “contradict[s] international best practice” by creating a double standard of “legality” (Global Network Initiative, 2021, GNI Expresses Concerns About and Calls on Indonesia to Reconsider the ‘MR5’ Regulation).

Concerns relating to international best practice have also been raised with regard to the MR5’s lack of a precise definition of what constitutes prohibited content. This imprecision gives the Ministry the power to decide what should be removed for provoking “public unrest and disturbance of public order”. As the GNI outlined in its analysis of the law, and as Tech Against Terrorism raised on multiple occasions throughout the first Online Regulation Series, the determination of limits to freedom of expression online should be undertaken by an independent judicial authority, not by an executive authority charged with enforcing the law, in line with international human rights standards. The MR5’s incompatibility with those standards is heightened by explicit statements from the Indonesian government that platforms should comply with Kominfo’s orders to remove prohibited content even if the content is otherwise considered legal under international human rights law (Rodriguez, 2021).

The broadness and vagueness of the MR5’s prohibited content category should also be understood in the context of the already broad definition of terrorism in Indonesia’s counterterrorism framework, Law No.15 of 2003, as amended in 2018. At the time the counterterrorism law was amended, human rights organisations, including Amnesty International (Global Network Initiative, 2021) and Human Rights Watch (2018), expressed concerns that the broad scope of the law could be used to restrict freedom of expression and target political dissent as terrorist activity. The Global Network Initiative’s analysis of the MR5 underlined how the broad definitions of terrorism in Law No.15 combined with the stringent requirements of the MR5 “are usually a recipe for overbroad content removal and other unintended consequences” (Global Network Initiative, 2021).

The criminalisation of providing instruction on how to access prohibited content also undermines the right to seek information. As underlined in Article 19’s analysis of the law, circumvention tools can be used for legitimate reasons, notably to protect one’s right to privacy online. Article 19 also references the report by the UN Special Rapporteur on Freedom of Opinion and Expression on encryption and anonymity in the digital age, which states that both encryption and anonymity should be protected to ensure the privacy and security necessary for freedom of expression online, and that limitations to either should therefore be necessary, proportionate, and legitimate, in line with international human rights standards. According to Article 19, however, the broad scope and significant sanctions of the MR5 breach international human rights standards (Article 19, 2021).

Unclear legal liability scheme for ISPs’ employees

The MR5’s requirement for tech companies to establish a Point of Contact (PoC) responsible for content removal and data access creates an unclear liability framework in which platforms’ employees may acquire individual liability for corporate actions. Article 19’s analysis of the MR5 details the human rights concerns associated with the appointment of a local employee responsible for government requests and stresses that individuals so appointed may “face heightened risk[s] of reprisal or judicial harassment” and therefore engage in self-censorship or pro-active removal to obviate government pressure (Article 19, 2021).

In the first edition of the Online Regulation Series Handbook, Tech Against Terrorism warned against the introduction of legal liability for platforms’ employees, which would risk criminalising those acting against the dissemination of terrorist content instead of those sharing such content. Concerns over legal liability for platforms’ employees are also shared by other digital rights associations, including the Global Network Initiative (2021).

The MR5 compromises the security and privacy of end-to-end encryption

As raised in our landmark report on “Terrorist Use of E2EE: State of Play, Misconceptions, and Mitigation Strategies”, Tech Against Terrorism cautions against government regulations that require tech platforms to modify their encryption systems and processes, or that could weaken encryption by mandating tech companies to introduce monitoring tools.

By applying indiscriminately to all types of online services, and to public and private communications alike, the MR5 threatens online security and privacy through its requirement for tech companies to monitor encrypted communications in order to counter the dissemination of prohibited content.

The requirements for platforms to grant law enforcement access to electronic data and systems (both for monitoring and oversight purposes) threaten encryption, because it is impossible for tech companies offering E2EE to comply with such provisions and provide law enforcement with the required data. Given that tech platforms risk liability for user-generated content if they do not cooperate with the Indonesian authorities, this poses a direct threat to E2EE services in Indonesia and, by extension, to the online security and privacy of Indonesians: tech companies will have to modify their systems to comply with the law, presumably by abandoning E2EE, or withdraw from Indonesia entirely.

Tech Against Terrorism raised similar concerns about E2EE with regard to other legislation covering both private and public channels of communication in the first edition of the Online Regulation Series Handbook, notably in the case of Singapore (pp. 62–64), which was the first country to pass online regulation applying to all types of online services.

Lack of consideration for tech sector diversity and increased reliance on automated tools

Indonesia’s online regulatory framework, and in particular MR5, applies to all online platforms, regardless of their specific offering and resources. As Tech Against Terrorism has previously noted in its analysis of online regulation, indiscriminately applying regulation with stringent practical requirements to smaller and larger platforms alike risks punishing smaller and newer platforms for lacking resources instead of providing them with the support needed to counter terrorist material and other illegal content.

The “obligation of results”, whereby the law mandates an outcome with little practical guidance, requires platforms to remove or block access to all prohibited content whilst ensuring that their services cannot be used to facilitate the dissemination of such content, and strongly incentivises ISPs to rely on automated content moderation tools. As Tech Against Terrorism has previously highlighted in the Online Regulation Series Handbook, most automated tools currently lack the capacity to comprehend content and require human supervision to avoid excessive content takedown. An increased reliance on automated moderation solutions thus raises the risk of false positives, in which legal content is taken down, and raises questions about accountability for removal decisions. The use of automated solutions to detect and remove terrorist content is also not straightforward in practical terms. These solutions are no substitute for reasoned consensus on what constitutes a terrorist organisation, and their determinations must be informed by systematic proscriptions and designations by national governments and international organisations.

To learn more about the risks posed by lack of consideration for smaller platforms and the increased reliance on automated content moderation tools, see Section 1 of the Handbook on the State of Online Regulation (pp. 13–29).

To learn more about automated tools to counter terrorist use of the internet, existing challenges and recommendations, see Tech Against Terrorism’s report on “Gap Analysis and Recommendations for deploying technical solutions to tackle the terrorist use of the internet” – this report was drafted by Tech Against Terrorism as the chair of the Global Internet Forum to Counter Terrorism’s working group on Technical approaches.