People in New Zealand who display violent extremist tendencies on ChatGPT will be guided to human and chatbot‑based deradicalisation support via a new tool currently under development, its creators said.
The initiative is the latest effort to tackle safety concerns as AI companies face a rising number of lawsuits alleging they failed to prevent—or even facilitated—violent behaviour, according to Reuters.
In February, OpenAI faced potential government intervention in Canada after it emerged that a person who committed a deadly school shooting had been banned from the platform, yet authorities were not notified.
ThroughLine, a startup recently contracted by ChatGPT owner OpenAI, as well as rivals Anthropic and Google, to direct users flagged as at risk of self-harm, domestic violence, or eating disorders to crisis support, is now exploring ways to expand its services to prevent violent extremism, founder and former youth worker Elliot Taylor said.
The company is in talks with The Christchurch Call—an initiative created after New Zealand’s deadliest terrorist attack in 2019 to combat online hate—where the anti-extremism group would provide guidance as ThroughLine develops the intervention chatbot, Taylor added.
“It’s something that we’d like to move toward and to do a better job of covering and then to be able to better support platforms,” Taylor said in an interview.
OpenAI confirmed its partnership with ThroughLine but declined to provide further details. Anthropic and Google did not immediately respond to requests for comment.
When the AI identifies signs of a potential mental health crisis, it directs the user to ThroughLine, which connects them with an available human-run service nearby.
However, ThroughLine’s focus has so far been limited to specific categories, the founder said.
With the growing use of AI chatbots, the range of mental health issues people share online has expanded—and now includes encounters with extremist content, he added.