Policy Brief – Dehumanisation

Why focus on Dehumanisation?

AMAN has made a series of policy proposals to governments, tech platforms, regulators and fellow civil society organisations.

Defining and identifying hate speech at scale comes with challenges. Our proposals aim to cut through these challenges and focus on some of the most potent vectors of harm.

We are determined to have hateful anti-Muslim stereotypes and narratives acknowledged and addressed by major platforms. We explain how Islam can be used as a proxy to dehumanise and attack Muslims.

Our proposals are grounded in human rights and designed to be applied across the board, to the benefit of all humanity.

Dehumanisation means

An actor that serially or systematically produces or publishes material which an ordinary person would conclude:

(a) presents the class of persons identified on the basis of a protected characteristic as having the appearance, qualities, or behaviour of an animal, insect, filth, a form of disease or bacteria, an inanimate or mechanical object, or a supernatural alien. This material may include words, images, and/or insignia (“Dehumanising language”); or

(b) curates information to a specific audience to cumulatively portray that the class of persons identified on the basis of a protected characteristic:
(i) are polluting, despoiling, or debilitating an ingroup;
(ii) have a diminished capacity for human warmth and feeling or independent thought;
(iii) pose a powerful threat or menace to an ingroup; or
(iv) are to be held responsible for, and deserving of collective punishment for, the specific crimes or alleged crimes of some of their “members” (“Dehumanising discourse”).

How did we develop this working definition?

AMAN developed this working definition after spearheading a study of five online information operations (Abdalla, Ally and Jabri-Markwell, 2021). The first iteration of the definition was published in a joint paper with UQ researchers (Risius et al., 2021).

The three categories of dehumanising speech in clause (a) are drawn from Maynard and Benesch (80) and fleshed out with further examples from tech company policies (see, for example, Meta).

Subclause (b)(i) is derived from Maynard and Benesch (80).

Subclause (b)(ii) is derived from Haslam (258).

Subclauses (b)(iii) and (b)(iv) are elements of dangerous speech that Maynard and Benesch refer to as ‘threat construction’ and ‘guilt attribution’ respectively (81). However, Abdalla, Ally and Jabri-Markwell’s work shows how such conceptions are also dehumanising: they assume a group operates with a single mindset, lacking independent thought or human depth (using Haslam’s definition), and they combine with ideas that Muslims are inherently violent, barbaric, or savage, or plan to infiltrate, flood, reproduce and replace (like disease or vermin) (15). The same study found that the melding and flattening of Muslim identities behind a threat narrative through headlines over time was a method of dehumanisation (17). Memes based on demographic invasion theory (9), and headlines offering ‘proof’ of that theory (20), also elicited explicit dehumanising speech from audiences.

Maynard and Benesch write, ‘Like guilt attribution and threat construction, dehumanization moves out-group members into a social category in which conventional moral restraints on how people can be treated do not seem to apply’ (80).

Updated 24 October 2022

Actions we can take

Evidence to support this action