We have been at the forefront of pushing for online safety for communities affected by racism, especially by organised information operations. However, with government regulation comes a requirement for government transparency.
Governments must not interfere with an open, free and secure internet in hidden ways. Their actions must be out in the open, and affected users need to be notified. Citizens in a democracy expect to know what role their government is playing in relation to social media companies. Government transparency is critical to building and maintaining trust and confidence in government.
The Australian Government also has an important role to play in demonstrating the benefits of transparency to companies. Below we summarise some of the international findings about the transparency standards governments must meet.
- Transparency reports that provide aggregated data and qualitative information about moderation actions, disclosures, and other practices concerning user-generated content and government surveillance;
- User notifications about government demands for their data and moderation of their content;
- Access to data held by intermediaries for independent researchers, public policy advocates, and journalists; and
- Public-facing analysis, assessments, and audits of technology company practices with respect to user speech and privacy from government surveillance.
Action Coalition on Meaningful Transparency
“Auditing” could be understood to refer to the following types of exercise, all of which are represented in the DSA, but have important differences:
1. Actions by professional accredited independent third parties (i.e., “organisations” under article 37(3) performing article 37(1) audits), subject to their own industry obligations and professional codes and standards. This is the framework established by article 37 and is the primary subject of this briefing.
2. Actions by VLOPs/VLOSEs to conduct internal self-assessments of their technical and operational systems and decision-making process for systemic risks or non-compliance: for example, the risk assessment and mitigations frameworks, or the imposition of the internal compliance function on companies under the DSA (article 41).
3. Actions by external parties to critically analyse public materials by using other public materials: for example, testing the reliability of information in a transparency report (article 42) by comparing it with other transparency reports (including in other jurisdictions) or publicly available materials (such as audit reports published under article 37(4)). Article 40 also creates a kind of “audit by vetted researchers” in relation to systemic risks in the Union and to adequacy of risk mitigation measures.
In practice, these three areas operate together: organisations audit themselves; external auditors audit those organisations’ self-audits; and external parties analyse the materials produced by those internal and external audits. The resources collated in this briefing touch on all three categories, though focus primarily on the first.
Governments and other state actors should ensure that companies are not prohibited from publishing information detailing requests or demands for content or account removal or enforcement which come from state actors, save where such a prohibition has a clear legal basis, and is a necessary and proportionate means of achieving a legitimate aim.
Governments and other state actors should themselves report their involvement in content moderation decisions, including data on demands or requests for content to be actioned or an account suspended, broken down by the legal basis for the request. Reporting should account for all state actors and, where applicable, include subnational bodies, preferably in a consolidated report.
Governments and other state actors should consider how they can encourage appropriate and meaningful transparency by companies, in line with the above principles, including through regulatory and non-regulatory measures.
Users should know when a state actor has requested or participated in any actioning of their content or account. Users should also know if the company believes the actioning was required by relevant law. While some companies now report state demands for content restriction under law as part of their transparency reporting, other forms of state involvement are not reported either publicly or to the actioned users. Companies should instead clearly report to users whenever there is any state involvement in the enforcement of the company’s rules and policies.
Specifically, users should be able to access:
- Details of any rules or policies, whether applying globally or in certain jurisdictions, which seek to reflect requirements of local laws.
- Details of any formal or informal working relationships and/or agreements the company has with state actors when it comes to flagging content or accounts or any other action taken by the company.
- Details of the process by which content or accounts flagged by state actors are assessed, whether on the basis of the company’s rules or policies or local laws.
- Details of state requests to action posts and accounts.
When providing a user with notice about why their post has been actioned, companies should ensure that notice includes:
- Specific information about the involvement of a state actor in flagging or ordering actioning. Content flagged by state actors should be identified as such, and the specific state actor identified, unless prohibited by law. Where the content is alleged to be in violation of local law, as opposed to the company’s rules or policies, the users should be informed of the relevant provision of local law.
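By way of illustration only, the minimum notice contents above could be modelled as a simple record. Every field name and value in this sketch is hypothetical; no company's actual notice format is implied:

```python
# Hypothetical sketch of a user-facing notice for an actioned post.
# Field names and values are illustrative only, not any real schema.
notice = {
    "action": "content_removed",
    "state_actor_involved": True,
    # The specific state actor should be named unless prohibited by law.
    "state_actor": "Example Ministry",        # hypothetical name
    # Basis for the action: the company's own rules, or local law.
    "basis": "local_law",
    # When the basis is local law, cite the relevant provision.
    "legal_provision": "Example Act, s. 12",  # hypothetical citation
}

# A notice citing local law should identify the provision relied on.
if notice["basis"] == "local_law":
    assert notice.get("legal_provision") is not None
```

The point of the sketch is that state involvement and its legal basis are explicit fields, not free text buried in a generic policy notice.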
Tech Against Terrorism
These Guidelines are – just like Tech Against Terrorism’s Guidelines for tech companies – designed to drive increased transparency from governments around a small set of core principles and key metrics. The aim is to encourage governments to be more transparent and accountable towards their citizens and to allow for civil society oversight of government online counterterrorism activities.
Part A: Legal basis
Explain the legal basis for the activities undertaken by your government and its law enforcement agencies to discover, report and/or refer terrorist activity to tech companies, by detailing:
- Your country’s definition of terrorism as defined in legislative frameworks
- Your country’s definition of terrorist content (if any) in legislative frameworks
- Your country’s terrorist designation list(s) (if any) and/or the inter-governmental designation lists to which your country adheres
- The provisions of your country’s legislative framework with regard to online terrorist activity
- The legislative framework that enables government entities and law enforcement in your country to send requests ordering action against online terrorist activity
- The international treaties your country has signed and ratified
Part B: Process & Systems
Explain the processes and systems supporting your government’s online counterterrorism efforts by detailing:
- The processes through which state actors discover terrorist activity online
- The systems used by state actors to discover terrorist activity online (including automated tooling and software)
Orders and requests
- The processes and systems state actors use to submit orders and requests to tech companies to action terrorist content and/or activity, or to demand user information, in accordance with specific legislation.
- The processes and systems through which state actors refer terrorist content and activity to tech companies for examination against tech company Terms of Service, including via Internet Referral Units (if applicable)
- The review your government or state actors carry out before sending orders, requests, and ToS referrals to tech companies
- The type of content and data your government and/or state actors store and record following discovery of terrorist activity online
- The type of content and data your government and/or state actors store and record following requests and referrals to tech companies
- The processes through which your government supports companies in facilitating redress for content or activity that was wrongfully removed as a result of a government request.
Part C: Report
Provide data on your engagement with tech companies around online terrorist activity by detailing your government’s:
Source of discovery
- Proactive discovery via the processes outlined in Part B
- Proactive discovery via the systems outlined in Part B
- Reports from the public (if relevant)
Information on content and activity discovered
- Total amount of content or activity discovered to violate the legislative framework mentioned in Part A, broken down by:
o Source of discovery
o Type of violation
o Terrorist group or actor (and designation status)
Orders and requests made to tech companies (in numbers)
- Removal requests, broken down by:
o Type of violation
o Terrorist group or actor
- User information requests
o Broken down by company
Referrals made outside legal channels (in numbers)
- ToS Referrals
o Broken down by company
o Broken down by terrorist group or actor
- Appeals or other contestations made against government reports, segmented by:
o Type of report (legal order or ToS referral)
o Success rate for each of the above
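The Part C metrics can be read as a simple data model for a government transparency report. The sketch below is ours, not Tech Against Terrorism’s: field names such as `removal_requests` are hypothetical labels for the categories listed above, assuming counts are published per breakdown.

```python
from dataclasses import dataclass, field


@dataclass
class RemovalRequests:
    """Orders/requests to companies under the Part A framework, in numbers."""
    by_violation_type: dict[str, int] = field(default_factory=dict)
    by_terrorist_actor: dict[str, int] = field(default_factory=dict)


@dataclass
class TransparencyReport:
    """Illustrative data model for the Part C reporting categories."""
    # Source of discovery (counts)
    proactive_via_processes: int = 0
    proactive_via_systems: int = 0
    public_reports: int = 0
    # Content/activity found to violate the Part A framework
    violations_by_source: dict[str, int] = field(default_factory=dict)
    violations_by_type: dict[str, int] = field(default_factory=dict)
    violations_by_actor: dict[str, int] = field(default_factory=dict)
    # Orders and requests made to tech companies
    removal_requests: RemovalRequests = field(default_factory=RemovalRequests)
    user_info_requests_by_company: dict[str, int] = field(default_factory=dict)
    # ToS referrals made outside legal channels
    tos_referrals_by_company: dict[str, int] = field(default_factory=dict)
    tos_referrals_by_actor: dict[str, int] = field(default_factory=dict)
    # Appeals/contestations, segmented by report type, with success rates
    appeals_by_report_type: dict[str, int] = field(default_factory=dict)
    appeal_success_rate_by_type: dict[str, float] = field(default_factory=dict)

    def total_discovered(self) -> int:
        """Total items discovered across all sources."""
        return (self.proactive_via_processes
                + self.proactive_via_systems
                + self.public_reports)


# Example with invented numbers, purely to show the shape of the model.
report = TransparencyReport(proactive_via_processes=120,
                            proactive_via_systems=340,
                            public_reports=40)
report.removal_requests.by_violation_type["incitement"] = 75
```

Structuring the report this way makes the Guidelines’ breakdowns (by source, violation type, actor, and company) explicit fields that civil society can compare across governments and reporting periods.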
Tech Against Terrorism provides an example report showing what reporting in accordance with the Guidelines could look like.