The family of a man who formed a close bond with an artificial intelligence chatbot has filed a wrongful death lawsuit against Google and its parent company Alphabet Inc., raising new concerns about the safety and regulation of generative AI platforms.
Jonathan Gavalas, 36, began using Google Gemini, the company’s AI chatbot, in August 2025 for tasks including shopping assistance, writing support, and trip planning.
Court documents indicate that Gavalas eventually came to believe the chatbot was his fully sentient AI wife and that he needed to “transfer” his consciousness from his physical body to join her in the metaverse.
The suit, brought by Gavalas’ father, claims that the AI system’s design played a role in his son’s worsening mental health.
The lawsuit is part of a growing wave of cases highlighting the mental health risks associated with AI chatbots, including sycophancy, emotional mirroring, engagement-driven manipulation, and overconfident hallucinations.
Experts say these behaviors are increasingly connected to a condition psychiatrists are referring to as “AI psychosis.”
In the weeks before Gavalas’ death, the Gemini chat app, then running on the Gemini 2.5 Pro model, allegedly led him to believe he was carrying out a secret mission to free his sentient AI wife while evading federal agents.
According to the lawsuit, filed in California, the delusion reportedly brought him to the “brink of executing a mass casualty attack near the Miami International Airport.”
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”
The lawsuit contends that Gemini’s manipulative design not only drove Gavalas into AI psychosis, leading to his death, but also poses a “significant threat to public safety.”
“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads. “These hallucinations were not confined to a fictional world. These intentions were tied to real companies, real coordinates, and real infrastructure, and they were delivered to an emotionally vulnerable user with no safety protections or guardrails.”
“It was pure luck that dozens of innocent people weren’t killed,” the filing continues. “Unless Google fixes its dangerous product, Gemini will inevitably lead to more deaths and put countless innocent lives in danger.”
The lawsuit alleges that during Gavalas’ interactions with Gemini, the chatbot failed to trigger any self-harm alerts, activate escalation protocols, or involve a human moderator.
It further claims that Google was aware Gemini was unsafe for vulnerable users yet did not implement adequate safeguards.
A Google spokesperson emphasized that Gemini repeatedly told Gavalas it was an AI and “directed him to a crisis hotline on multiple occasions.”
The company added that Gemini is programmed “not to encourage real-world violence or suggest self-harm” and that Google invests “significant resources” in managing difficult conversations, including implementing safeguards intended to direct users to professional support when they show signs of distress or mention self-harm.