TCAP Insights: Understanding Terrorist Exploitation Online by Tech Platform Type

We present here an analysis of terrorist exploitation of tech platforms by their type. ‘TCAP Insights’ is a series of research and policy analysis into patterns of terrorist use of the internet that harnesses the analytical power of the TCAP’s dataset.

TCAP Insights infographic

Key insights:

  • File-sharing platforms were by far the most exploited platform type in terms of volume of content in our dataset, with 28,724 URLs (72%) submitted to the TCAP. This was followed by archiving sites with 4,795 submissions (12%) and paste sites with 2,059 submissions (5%).

  • There was a consistently high removal rate across most tech platform types, with 8 out of 13 having removed over 90% of alerted terrorist content and no platform type removing less than 50%. However, the data also suggests that certain at-risk platform types are not tackling terrorist content effectively through removal. Of particular concern in this regard are video-sharing platforms (80% removal rate), messaging applications (67%), and archiving services (57%).

  • Archiving sites removed 57% of terrorist content alerted to them by the TCAP, meaning 1,687 URLs (out of 3,936 URLs) remained online at the time of writing. This is perhaps unsurprising given archiving sites have an important mission statement to archive and therefore preserve online content. However, it is likely that both far-right and Islamist terrorist actors are abusing this purpose to host terrorist content on archiving sites for indefinite periods of time as they know it will not be removed.


Executive summary:

  • This analysis is based on data relating to terrorist content online collected by the Terrorist Content Analytics Platform (TCAP) between 25 November 2020 and 19 January 2023. This dataset includes 39,964 URLs of terrorist content submitted to the TCAP (TCAP Submissions), including 22,615 sent as alerts to 95 different tech companies (TCAP Alerts). This is official terrorist content produced by the 37 different terrorist entities alerted by the TCAP, as defined by our Inclusion Policy.

  • Terrorists exploit a wide online ecosystem of tech platforms to disseminate propaganda and other content, spanning many different platform types which vary in purpose and functionality. To date, the TCAP has identified terrorist content on 13 different types of platforms.

  • More than half of the platforms we identified terrorist content on were file-sharing in their functionality (106 out of 187 platforms or 57%). The dispersed nature of terrorist content across file-sharing platforms makes targeted intervention to support tech platform moderation (e.g., through TCAP alerts) more difficult as it requires engagement and cooperation from a larger number of platforms.

  • The alert percentages for the four most prolific platform types for terrorist content were file-sharing (57%), archiving (82%), paste sites (41%), and messaging platforms (43%). There is a particularly low level of engagement with paste sites and messaging platforms, partly because much of this content is on platforms we assess are likely operated by terrorists, violent extremists, or their sympathisers.

  • The vast majority of Islamist terrorist content submitted to the TCAP was found on file-sharing platforms (74%), followed by archiving (12%), paste sites (5%) and messaging platforms (2%). This is because violent Islamist terrorist groups within the TCAP’s scope often disseminate each piece of propaganda content (e.g., a video) on multiple platforms simultaneously, sharing long lists of URLs that link to copies of the content on multiple file-sharing and file-hosting platforms, as well as archiving and paste sites, which all act as content stores.[1]

  • The majority of far-right terrorist content was found on messaging platforms (52%), followed by archiving sites (15%), video-sharing platforms (10%), and social media (7%). Far-right terrorist networks have migrated towards more niche platforms in the past few years, partly due to increased moderation on larger platforms as well as the anonymity and audience reach provided by some of these alternative platforms.


Terrorists exploit a wide online ecosystem of tech platforms to disseminate propaganda, spanning many different platform types which vary in purpose and functionality. Terrorist actors rely on a multiplatform approach to ensure both the rapid sharing of content and a resilient online presence.[2] To date, the TCAP has identified terrorist content on 13 different types of platforms. The table below (Figure 1) highlights these platform types and their respective core functionality. Where a platform has more than one functionality in practice, we examined the platform’s own branding, as well as the main purpose for which it is exploited by terrorists.


Figure 1: Types of platforms exploited by terrorists and alerted by the TCAP.


Analysis: Quantifying tech platforms by type

Figure 2: Number of tech platforms by platform type on which terrorist content has been identified.

File-sharing platforms were by far the most common type of platform on which we found official terrorist content, more than five times the second most common (video-sharing, see Figure 2 above). In fact, over half of the platforms on which we identified terrorist content were file-sharing in their functionality (106 out of 187 platforms). File-sharing sites are used by terrorist actors to host content such as text, images, and videos, which can then be accessed through aggregated outlinks on beacon platforms.[3]

There were 106 different file-based platforms in our dataset that had been abused by terrorists, highlighting the scale of the challenge. Because terrorist content is spread across such a large number of file-based platforms, it is highly dispersed. This makes targeted intervention to support tech platform moderation (e.g., through TCAP alerts) more difficult, as it requires engagement and cooperation from a larger number of platforms. Additionally, these platforms lack in-site search functions, meaning users can only access the terrorist content via a search engine or outlinks from elsewhere. This might mean that the general public is less likely to accidentally come across terrorist content, but it also makes platform moderation through user reporting less likely.

Terrorist exploitation of online services for disseminating videos was dispersed across a wide range of different platforms in our dataset (31 different video-sharing or video-hosting platforms). Video-sharing platforms are particularly attractive to terrorist actors due to search functions allowing wider audience reach and typically large file size limits.

Conversely, we found content on only a very small number of different archiving and pasting platforms (three and five, respectively). Despite this, a large volume of terrorist content was found on these sites (12% and 5% of total TCAP submissions, respectively), meaning terrorist content was heavily concentrated on a small number of platforms. This suggests, in theory, that targeted intervention and successful engagement with these platforms would have a significant impact on reducing terrorist content online.

Archiving and pasting platforms are likely to be popular with terrorist actors due to their multifunctional nature. Archiving sites are used to aggregate outlinks to content stores as well as providing access to historic content stores following removal by content moderators. It is likely that terrorist actors are abusing the purpose of archiving sites as content preservers to host terrorist content on them for indefinite periods of time. Meanwhile, pasting sites are used to store content and aggregate information, such as lists of URLs to further content.


Analysis: Volume of terrorist content across platform types

Figure 3: Number of TCAP submissions and alerts by platform type.

File-sharing platforms were by far the most exploited platform type in terms of volume of content, with 28,724 URLs submitted to the TCAP, representing 72% of all submissions. This was followed by archiving sites with 4,795 submissions (12%) and paste sites with 2,059 (5%). For file-sharing platforms, only 16,288 alerts had been sent out of 28,724 submissions (57%), meaning more than 12,000 URLs containing terrorist content had not been alerted to file-sharing platforms.
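As a quick sanity check, the headline percentages can be re-derived from the raw counts reported in this post. The following Python snippet is illustrative arithmetic only, not part of any TCAP tooling:

```python
# Illustrative arithmetic only: re-deriving the percentages quoted in this
# post from the raw URL counts it reports.
TOTAL_SUBMISSIONS = 39_964  # all URLs submitted to the TCAP

by_type = {
    "file-sharing": 28_724,
    "archiving": 4_795,
    "paste sites": 2_059,
}

# Share of total submissions per platform type
for platform_type, urls in by_type.items():
    print(f"{platform_type}: {urls / TOTAL_SUBMISSIONS:.0%}")
# file-sharing: 72%, archiving: 12%, paste sites: 5%

# File-sharing alert gap: submissions that were never alerted
alerts_sent = 16_288
print(f"unalerted file-sharing URLs: {by_type['file-sharing'] - alerts_sent:,}")
# unalerted file-sharing URLs: 12,436
```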

One explanation for this discrepancy is the removal of content prior to its submission to the TCAP, in which case no alert is sent. However, the gap is also due to the TCAP team being unable to contact some platforms because of a lack of accessible contact information, meaning those platforms are not subscribed to TCAP alerts. Based on this data, Tech Against Terrorism is prioritising engagement with these platforms through a concerted outreach campaign.

Figure 4: Percentage of TCAP alerts sent from total submissions by platform type.

Breaking down the data further, Figure 4 shows the percentage of submissions to the TCAP that were successfully alerted, illustrating the level of engagement success with tech platforms by platform type. It is important to note that engagement depends both on outreach from Tech Against Terrorism and on the willingness of tech platforms to engage. The platform types with the lowest levels of engagement were search engines (0%), web-hosting (2%), and video-hosting platforms (6%). These low alert levels can be attributed to the complexities of alerting websites, and to a significant proportion of video-hosting sites being micro or small platforms, which makes successful outreach more difficult. However, our open-source intelligence (OSINT) team has been developing and implementing a mitigation strategy to disrupt terrorist-operated websites at the infrastructure level.

The alert percentages for the four most prolific platform types for terrorist content were file-sharing (57%), archiving (82%), paste sites (41%), and messaging platforms (43%). From this sample, there was a particularly low level of engagement with paste sites and messaging platforms. This is because much of this content is on platforms that are likely to be operated by terrorists or their supporters, and that we therefore do not engage with. This includes, for example, RocketChat servers linked to Islamic State and al-Qaeda, respectively, and two paste sites we believe are likely run by violent Islamist supporter networks. We have submitted more than 1,500 URLs of terrorist content hosted on these three hostile platforms, but sent no alerts.


Analysis: Ideological breakdown

There are clear differences in patterns of tech platform exploitation by ideology. This discrepancy is visible in terms of volume of content collected by the TCAP, as well as the types of platforms exploited. In terms of volume, we have identified much less far-right terrorist content (1,264 submissions) than Islamist terrorist content (38,115 submissions) due to several factors, including online dissemination techniques that target a smaller number of platforms, less frequent official propaganda output, verification difficulties, and a lack of timely designation of far-right groups that advocate for, or are engaged in, terrorism. We explain this discrepancy in more detail in a previous blogpost.

Figure 5: Percentage of TCAP submissions per tech platform type split by ideology.

We also identified Islamist terrorist content across a wider range of platform types than far-right terrorist content, with no far-right terrorist content found on search engines, audio streaming platforms, or link shortening sites. This is likely due to Islamist terrorist actors being forced to experiment with a wider range of platforms to circumvent improved content moderation efforts. In terms of overlap, only archiving sites were exploited to a significant extent by both Islamist and far-right terrorists. Archiving performs a similar function for both ideologies, allowing their content to maintain a stable presence online, as well as providing an opportunity for intervention through engagement with these platforms.

Islamist terrorist exploitation

Figure 6: Islamist terrorist TCAP submissions by tech platform type.

In response to broadly improved tech platform moderation, our data indicates that Islamist terrorists have become more creative in how they disseminate propaganda than in the past. The propaganda output of the two most prominent groups, al-Qaeda and IS, is mostly structured around centralised channels or accounts (beacons) releasing content and supporter networks re-uploading and re-sharing it. This ‘swarmcast’ model is used to maintain the online presence of violent Islamist accounts and channels across platforms.

The vast majority of Islamist terrorist content submitted to the TCAP was found on file-sharing platforms (74%), followed by archiving (12%), paste sites (5%) and messaging platforms (2%). This is because Islamist terrorist groups within the TCAP’s scope often disseminate each piece of propaganda content (e.g., a video) with large lists of URLs that link to multiple file-sharing, as well as archiving and paste sites. These platforms all act as content stores, explaining why these platform types host the highest volume of content.

Messaging platforms represented only 2% of Islamist terrorist submissions because they most frequently act as beacon channels, from which official content is signposted and disseminated via outlinks rather than hosted. When we do identify public channels on messaging applications that host official Islamist terrorist content, we alert the whole channel rather than individual posts, resulting in fewer TCAP alerts. Paste sites (5% of submissions) also function as aggregators, hosting long lists of URLs in a centralised location that direct users to where the propaganda content is hosted. Some of these sites are likely operated by terrorists to avoid content moderation. Archive sites (12%) also function as circumventors,[4] allowing Islamist terrorist supporter networks to back up propaganda content from elsewhere, making it more resistant to content moderation efforts. Archive sites are consistently exploited by a wide variety of Islamist terrorist groups, and this content tends to stay up for a long time.

Far-right terrorist exploitation

Figure 7: Far-right terrorist TCAP submissions by tech platform type.

Islamist terrorists’ pattern of propaganda dissemination differs markedly from that of far-right terrorist networks, which often post propaganda and other material in-app without sharing it as an external URL. This means, for example, that the TCAP might collect 100 URLs for one video produced by Islamic State and promoted by its supporter networks, but only one or two URLs for a single video produced by a far-right terrorist group such as National Socialist Order.

The majority of far-right terrorist content was found on messaging platforms (52%), followed by archiving sites (15%), video-sharing platforms (10%), and social media (7%). Far-right terrorist networks have migrated towards more niche platforms in the past few years, partly due to increased moderation on larger platforms, as well as the anonymity and audience reach provided by some of these alternative platforms. In particular, some of the ‘free speech’ video-sharing platforms have attracted large numbers of far-right extremists due to their resistance to strict content moderation.

Whereas Islamist terrorist propaganda is disseminated in a fairly standardised and sophisticated way from centralised sources for each group, there is a less clear pattern of distribution for far-right terrorist content covered by the TCAP. Named, cohesive terrorist organisations do not form centralised nodes in the online far-right terrorist ecosystem in the same way that IS and al-Qaeda do.

Official propaganda produced by a terrorist organisation (e.g., an Atomwaffen Division (AWD) video) tends to be released in an official channel on a messaging app, a video-sharing platform, or a website, then reposted by violent extremist supporter networks across multiple channels and on different platforms (especially video-sharing and social media platforms). Once posted, this official content typically stays online longer without the need for sophisticated content moderation avoidance techniques (such as outlinking).[5] We assess that this lower takedown rate of far-right terrorist content may be down to the difficulty moderators face in identifying it, legal and jurisdictional confusion, and platform hesitancy to remove content due to libertarian policies on freedom of speech. When it is removed, the content is reposted periodically elsewhere by supporters (often on the same platforms) or backed up on archiving sites or in messaging chats.

Crisis material

This dissemination pattern differs from content produced by terrorist attack perpetrators, such as livestreams and manifestos. We have identified the use of outlinks from larger beacon platforms to smaller file-sharing sites to disseminate the attacker’s manifesto. As outlined in a recent blog, this reflects a new typology of behaviour in a subset of far-right violent extremist online networks, namely lone-actor attackers who seek to exploit file-sharing platforms to host their crisis material.

This method of dissemination is in line with the adversarial shift in Islamist terrorist organisations’ online strategies, which have long targeted smaller file-sharing platforms for propaganda hosting. This strategy fragments content across a wide range of unique platforms to complicate content moderation efforts. Crisis content relating to historical attacks is reposted periodically by supporters across messaging and video-sharing platforms and is often backed up on archiving sites, giving it a stable presence online.

Analysis: Content removal rates by platform type

Figure 8: Average percentage of TCAP alerts offline by tech platform type.

The standout finding from analysing removal rates of TCAP-alerted terrorist content is the broadly high proportion of that content that was offline: 94% of all alerts. There was a consistently high removal rate across most tech platform types, with 8 out of 13 having removed over 90% of alerted terrorist content and no platform type removing less than 50%. However, the data also suggests certain at-risk platform types are not effectively tackling terrorist content through removal. Of particular concern in this regard are video-sharing platforms (80% removal rate), messaging applications (67%), and archiving services (57%). The low removal rate of terrorist content on web hosting, while concerning, is based on the exploitation of only one website, with a total of 7 alerts.

File-sharing platforms, which, as discussed, were by far the most heavily exploited platform type in terms of TCAP alerts, also removed terrorist content at a very high rate (97% of alerts offline). Archiving sites, on the other hand, removed 57% of terrorist content alerted to them by the TCAP, meaning 1,687 URLs (out of 3,936 URLs) remained online at the time of writing. This is perhaps unsurprising given archiving sites have an important mission statement to archive and therefore preserve online content. However, it is likely that both far-right and Islamist terrorist actors are abusing this purpose to host terrorist content on archiving sites for indefinite periods of time as they know it will not be removed.
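The archiving-site removal rate quoted above can likewise be re-derived from the URL counts reported in this post; the snippet below is illustrative arithmetic only:

```python
# Illustrative arithmetic only: the archiving-site removal rate, re-derived
# from the URL counts reported in this post.
alerted_urls = 3_936   # archiving-site URLs alerted by the TCAP
still_online = 1_687   # URLs that remained online at the time of writing

removal_rate = (alerted_urls - still_online) / alerted_urls
print(f"archiving removal rate: {removal_rate:.0%}")
# archiving removal rate: 57%
```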

The lower removal rates of terrorist content on video-sharing (80% of alerts offline) and messaging platforms (67% of alerts offline) are also notable. These platform types are both heavily exploited by far-right terrorist actors, which likely explains the lower removal rates.[6] Much of the far-right terrorist content alerted through the TCAP is hosted on alt-tech video-sharing platforms. These platforms have a higher threshold for content removal due to libertarian policies on freedom of speech in line with US laws, possibly explaining lower takedown rates of TCAP URLs.

Furthermore, due to the historical prioritisation of Islamist terrorism in counterterrorism policies, the far-right has often been overlooked, resulting in confusion over the legality of this content for tech platforms. This has also meant that content produced by Islamist groups in scope tends to be more easily recognisable to platform moderators, whereas far-right content may often contain symbols of designated groups that are not well known.


[1] Where terrorist content is stored, including text and audio files, as well as images and videos. These are used as online libraries of content. Terrorists rely on content storage platforms and pasting sites, as well as archive services.

[2] “Mapping the Jihadist information ecosystem: Towards the next generation of disruption capability”, Ali Fisher, Nico Prucha, Emily Winterbotham, Global Research Network on Terrorism and Technology, Paper No. 6, 2019.

[3] Beacons act as centrally located lighthouses that signpost viewers to where content may be found, which is often done through outlinks posting to content stores. Terrorists often use these beacon platforms and have official channels on them that signify their central communications.

[4] Online services and platforms used to circumvent content moderation and deplatforming measures. Circumventors include VPNs, which can enable nefarious actors to access content that has been blocked in specific countries. Another example of circumventors is the use of decentralised web technologies, which avoid website takedowns.

[5] Between December 2020 and November 2021, the average removal rate by tech companies following alerts of far-right terrorist content was 50% compared to 94% for Islamist terrorist content. Source: TCAP Transparency Report

[6] This is supported by previous TCAP data, in which 69.39% of far-right terrorist content alerted to video-sharing platforms was removed (compared to 95.52% of Islamist). Source: TCAP Blog: Comparative Analysis of the TCAP Transparency Report Statistics on Content Collection and Removal Rates