IMPORTANT: The Terrorist Content Analytics Platform (TCAP) Incident Response Policy is NOT yet operational. Below, we provide an outline of what the purpose and scope of the policy is. Please stay tuned for further announcements, with the full policy to be published here in due course.

Key Objective

The TCAP’s Incident Response mechanism[1] aims to minimise the impact of terrorist attacks by inhibiting the spread and potential virality of attacker-produced content online, and by disrupting hostile responses of terrorist and violent extremist networks on the internet. Our priority is enabling the rapid and targeted disruption of terrorist content on smaller platforms.


Terrorist and violent extremist entities have long exploited internet services to increase the impact of terrorist acts, such as through livestreaming acts of violence or sharing “manifestos” or other content inciting acts of violence or detailing and justifying their motives. This “attacker-produced content” has the potential to spread rapidly across the internet in the immediate aftermath of a terrorist or violent extremist attack, increasing the risk that individuals are exposed to potentially radicalising or traumatising content.[2]

In particular, smaller platforms are disproportionately exploited by violent extremist and terrorist actors, and do not have the same capacity as larger platforms to respond quickly to evolving, ongoing threats. Furthermore, to date, smaller platforms are not captured in any pre-existing crisis response mechanisms, which means they do not receive alerts for content pertaining to an ongoing terrorist or violent extremist attack. This presents a significant gap in the crisis response landscape.

To address the risk of attacker-produced content spreading virally, and the exploitation of smaller platforms, Tech Against Terrorism has developed an Incident Response policy that outlines our processes for combating the online elements of these attacks via the Terrorist Content Analytics Platform (TCAP).

This policy directly meets and advances the Christchurch Call to Action commitments to (1) support smaller platforms as they build capacity to remove terrorist and violent extremist content; and (2) develop processes allowing online service providers to respond rapidly, effectively and in a coordinated manner to the dissemination of terrorist or violent extremist content following a terrorist event. Tech Against Terrorism will coordinate and collaborate with the Christchurch Call to Action and other partners to ensure this policy complements other crisis response mechanisms. In particular, this policy will be reviewed following each activation to determine the efficacy and efficiency of our processes, including its coordination with other existing mechanisms for responding to a terrorist or violent extremist act of violence. These include the Christchurch Call to Action’s Crisis Response Protocol, the EU Crisis Protocol, and the Global Internet Forum to Counter Terrorism’s (GIFCT) Content Incident Protocol.

The Tech Against Terrorism Incident Response policy is intended to complement these protocols by providing a solution to a key gap in the current crisis response landscape: alerting, supporting, and engaging with smaller online platforms.


The process for the Incident Response policy is outlined below. Please note that this is our proposed workflow for incident response, and that our ability to effectively alert platforms to third-party and traumatic content depends on expanding our open-source intelligence capacity.



Event: An act of real-world violence that is imminent, ongoing, or recently concluded, carried out by a non-state actor with the intent to endanger, or cause death or serious bodily harm to, a person or persons, and that is likely motivated by ideological, political, or religious goals.

Incident (trigger): An event in which content produced by the perpetrator(s) is shared on and circulating across online platforms depicting, justifying, or amplifying their actions or motivations and/or inciting others to commit acts of violence. This online component of an act of violence, as defined by ‘event’, is the trigger to activate the Incident Response mechanism.

Attacker-produced content: Any content produced by the perpetrator(s) of the attack circulating online in the time period surrounding the incident (before, during, or after) that depicts, justifies, or amplifies their actions or motivations and/or incites others to commit acts of violence. In the past, this has typically been a livestream, manifesto, or pledge video.

Third-party content: Content depicting the incident that was not originally produced by or in support of the perpetrator(s).

Traumatic content: Content that violates the dignity of victims through displaying graphic violence towards the victim(s) of a violent event.

[1] We use the term mechanism to describe the set of processes and systems which are activated in response to an ‘Incident’.

[2] Research reveals the radicalising impact exposure to attacker-produced content can have. See for example