Panelists and audience in CTED’s insight briefing.
On 17 October 2023, the Counter-Terrorism Committee Executive Directorate (CTED) held an insight briefing titled “Artificial intelligence and the potential risks and opportunities in the context of terrorism and counter-terrorism”. At the briefing, representatives of the technology sector shared their perspectives on some of the challenges and opportunities related to terrorist misuse of information and communications technologies (ICT), including artificial intelligence (AI) specifically.
In his opening remarks, David Scharia, Director and Chief of Branch in CTED, stated that while AI was not yet seen as a major imminent threat, the Counter-Terrorism Committee of the United Nations Security Council was closely monitoring misuse of AI by terrorist organizations.
Access to AI tools, particularly generative AI, has become more widespread. Generative AI, which mimics human content creation and powers tools such as ChatGPT, can be misused to create and disseminate terrorist narratives and disinformation online. In parallel, technology companies have been using AI to identify terrorist content and stop it from spreading online, as well as to support content moderation.
The panel, made up of three technology practitioners — Mr. Tom Thorley of the Global Internet Forum to Counter Terrorism (GIFCT), Mr. Priyank Mathur, Founder and CEO of Mythos Labs, and Ms. Dina Hussein of Meta — gave an overview of how terrorists can tap into and misuse AI and outlined what their organizations were doing to address potential threats.
The speakers noted that while AI tools were not flawless, they were continually improving and could prove particularly useful for smaller platforms that might otherwise lack the capacity to moderate their content effectively. The discussion on moderation also raised questions about the respective roles of AI and humans in decision-making on online content regulation, and the issues that can arise from this division. It was pointed out that AI models currently have limitations and biases that require mitigation and safety measures, including robust review processes, informed policies, and prevention mechanisms.
With regard to measures that could help mitigate some of these challenges, the speakers suggested a combination of data-driven counter-narratives, improved social media literacy, and greater AI awareness among policymakers and the public. They also emphasized that the key to addressing terrorist misuse of AI was to balance recognition and understanding of the potential challenges stemming from AI with an appreciation of its opportunities.
Additional photos from the briefing are available at CTED’s dedicated .