Meta Platforms Inc. and Google, the dominant players in the United States digital advertising market, are facing mounting legal challenges that could test the scope of their long-standing liability protections for user-generated content.
The two tech giants, alongside rivals such as TikTok and Snap, are defendants in a series of lawsuits whose legal theories are crafted to sidestep Section 230 of the Communications Decency Act.
The 1996 law, signed by then-President Bill Clinton, shields online platforms from being held liable for content posted by users while allowing them to moderate material without assuming publisher responsibility.
However, recent court rulings suggest that plaintiffs are finding new legal pathways around those protections.
The growing wave of litigation underscores intensifying legal and regulatory pressure on major technology companies, raising questions about the durability of Section 230 protections and the potential financial and operational fallout for the digital advertising industry.
Last week, a jury in New Mexico found Meta liable in a child safety case, while a separate jury in Los Angeles ruled that the Facebook parent and Google’s YouTube were negligent in a personal injury trial.
Shortly after the verdicts were made public, victims of convicted sex offender Jeffrey Epstein filed a class-action suit against Google and the Trump administration, accusing them of improperly disclosing personal information.
In their filing, the plaintiffs assert that Google’s AI Mode, which produces AI-generated summaries alongside links, functions as more than a neutral search tool.
They argue that the feature represents active content generation, strengthening their claim that Google should not be shielded by conventional platform liability protections.
Last week’s verdict against Meta and YouTube marked the first time a jury held social media platforms liable over claims that they intentionally designed their products to foster addiction among minors.
Rather than focusing solely on user-generated content, the case targeted the architecture of the platforms themselves.
Plaintiffs argued that features such as autoplay, recommendation algorithms, push notifications and certain visual filters operated like “digital casinos,” contributing to severe mental health challenges for a teenage girl who said she was unable to disengage from the apps.
In March, the father of Jonathan Gavalas filed a lawsuit against Google, alleging that the company’s Gemini chatbot encouraged his son to carry out a series of harmful “missions,” including staging a so-called catastrophic accident. The complaint claims the chatbot’s interactions contributed to the teenager’s death.
In January, Google reached a settlement with families who had sued the company and Character.AI, alleging their technologies caused harm to minors, including cases involving self-harm. Separately, last year OpenAI was sued by a family that alleged its ChatGPT chatbot played a role in their teenage son’s death.
Experts say appeals in the latest cases could ultimately reach the U.S. Supreme Court, where justices may decide whether the companies are entitled to legal protections shielding them from such claims.