A Los Angeles courtroom is hosting what may become the most consequential legal challenge for Big Tech to date, and an inflection point in the global debate over platform liability: For the first time, an American jury is being asked to decide whether platform design itself can give rise to product liability – not because of what users post on these platforms, but because of how the platforms were built.
The case was brought by a 20-year-old California woman identified by her initials, K.G.M. She says she began using YouTube around age 6 and created an Instagram account at age 9, and alleges that the platforms’ design features – likes, algorithmic recommendation engines, infinite scroll, autoplay and deliberately unpredictable rewards – got her addicted. She maintains that her social media addiction fueled depression, anxiety, body dysmorphia and suicidal thoughts.
TikTok and Snapchat settled with K.G.M. before trial for undisclosed sums, leaving Meta and Google as the remaining defendants. Meta CEO Mark Zuckerberg testified before the jury on February 18.
The stakes extend far beyond a single plaintiff. K.G.M.’s case is a bellwether trial: The court took it on as a representative test case to help gauge how juries will respond and to guide outcomes across an array of connected cases. Those cases involve approximately 1,600 plaintiffs, including more than 350 families and over 250 school districts. Their claims have been consolidated in a California Judicial Council Coordination Proceeding. The California proceeding shares legal teams and an evidence pool, including internal Meta documents, with a federal multidistrict litigation – bringing together thousands of federal lawsuits – that is scheduled to advance in court later this year.
Legal Innovation: Design as Defect
For decades, Section 230 of the Communications Decency Act has shielded technology companies from liability for content that platform users post. Whenever people have sued over harms linked to social media, companies have routinely invoked Section 230, and the cases have typically been dismissed early.
But the K.G.M. litigation uses a different legal strategy: negligence-based product liability. The plaintiffs argue that the harm arises not from third-party content but from the platforms’ own engineering and design decisions, the “informational architecture,” and features that shape users’ experience of content. Infinite scrolling, autoplay, notifications calibrated to heighten anxiety and variable-reward systems operate on the same behavioral principles as slot machines.
These are conscious product design choices that the plaintiffs contend should be subject to the same safety obligations as any other manufactured product, thereby holding their makers accountable for negligence, strict liability, or breach of warranty of fitness. Judge Carolyn Kuhl of the California Superior Court agreed that these claims warranted a jury trial. In her November 2025 ruling denying Meta’s motion for summary judgment, the judge distinguished between features related to content publishing, which Section 230 might protect, and features – like notification timing, engagement loops and the absence of meaningful parental controls – which might not.
Here, Kuhl established that the conduct-versus-content distinction – treating algorithmic design choices as the company’s own conduct rather than as the protected publication of third-party speech – is a viable legal theory for a jury to evaluate. This fine-grained approach, evaluating each design feature individually and recognizing the increased complexities of technology products’ design, represents a potential road map for courts nationwide.
What the Companies Knew
The plaintiffs’ product liability theory depends partly on what the defendant companies knew about the risks of their designs. The 2021 leak of internal Meta documents, widely known as the “Facebook Papers,” revealed that the company’s own researchers had flagged concerns about Instagram’s effects on adolescent body image and mental health. Internal communications disclosed in the K.G.M. proceedings have included exchanges among Meta employees comparing the platform’s effects to pushing drugs and gambling. Whether this internal awareness constitutes the kind of corporate knowledge that supports liability is a central factual question for the jury to decide.
The path is not entirely uncharted: There is a clear analogy to tobacco litigation. In the 1990s, plaintiffs successfully took on tobacco companies by proving they had concealed evidence about the addictive and deadly nature of their products. In K.G.M., the plaintiffs are making the same core argument: Where there is corporate knowledge, deliberate targeting, and public denial, liability follows.
K.G.M.’s lead trial attorney, Mark Lanier, is the same lawyer who won multibillion-dollar verdicts in the Johnson & Johnson baby powder litigation – a choice that signals the scale of accountability the plaintiffs are pursuing.
The Science: Contested but Consequential
The scientific evidence on social media and youth mental health is real but genuinely complex. The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) does not classify social media use as an addictive disorder. Researchers like Amy Orben have found that large-scale studies show small average associations between social media use and reduced well-being. Yet, Orben herself has cautioned that these averages might mask severe harms experienced by a subset of vulnerable young users, particularly girls ages 12 to 15.
The key legal question under the negligence theory is not whether social media harms everyone equally, but whether platform designers have an obligation to account for foreseeable interactions between their design features and the vulnerabilities of developing minds, especially when internal evidence suggested they were aware of the risks.
Foreseeability under negligence law has two components. First, a manufacturer has a duty to exercise reasonable care in designing its product, and that duty extends to harms that are reasonably foreseeable. Second, a plaintiff must show that the type of injury suffered was a foreseeable consequence of the design choice. The manufacturer does not need to have foreseen the exact injury to the exact plaintiff, but the general category of harm must have been within the range of what a reasonable designer would anticipate.
This is why the Facebook Papers and internal Meta research are so legally significant in K.G.M.’s case: They go directly to establishing that the company’s own researchers identified the specific categories of harm – depression, body dysmorphia, compulsive use patterns among adolescent girls – that the plaintiff alleges she suffered. If the company’s own data flagged these risks and leadership continued on the same design trajectory, that would considerably strengthen the foreseeability element.
THE BIGGER PICTURE: Even if the science is unsettled, the legal and policy landscape is shifting fast. In 2025 alone, 20 states in the U.S. enacted new laws governing children’s social media use. And this wave is not limited to the U.S. Countries – such as the U.K., Australia, Denmark, France and Brazil – are also moving forward with specific legislation, including mandates banning social media for those under 16.
The K.G.M. trial represents something more fundamental: The proposition that algorithmic design decisions are product decisions, carrying real obligations of safety and accountability. If this framework takes hold, every platform will need to reconsider not just what content appears, but why and how it is delivered.
Carolina Rossini is a Professor of Practice and Director for Program, Public Interest Technology Initiative at UMass Amherst.