On March 24, 2026, a Santa Fe jury delivered what may be the most consequential verdict in the history of social media regulation. After nearly seven weeks of trial, jurors found Meta Platforms liable on every count brought by the state of New Mexico, ruling that the company willfully engaged in unfair and deceptive trade practices and knowingly designed its platforms in ways that harmed children. The penalty: $375 million, assessed at the statutory maximum of $5,000 per violation, arithmetic that implies 75,000 individual violations.

The verdict makes New Mexico the first state in the nation to prevail at trial against a major tech company over harm to young people.

“The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” said New Mexico Attorney General Raúl Torrez. “Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew.”

How It Got to This Point

The case traces back to 2023, when Torrez’s office launched an undercover investigation that would form the backbone of the prosecution’s case. State agents created fake social media profiles posing as users younger than 14 on Facebook and Instagram. The results were immediate and disturbing — the decoy accounts were, in Torrez’s own words, “simply inundated with images and targeted solicitations” from child abusers.

That investigation, combined with a trove of internal Meta documents and testimony from former employees, painted a picture the state argued was damning: that Meta’s leadership knew about these dangers, received repeated warnings from their own engineers and child safety experts, and chose to do nothing that would meaningfully reduce them.

The trial examined internal Meta correspondence, deposition recordings of CEO Mark Zuckerberg, and testimony from platform engineers, whistleblowers, psychiatric experts, tech safety consultants, and local school educators who described the real-world fallout: sextortion schemes, self-harm content, and mental health crises tied directly to social media exposure.

The Zuckerberg Deposition

Perhaps the most revealing moments came from a recorded deposition of Zuckerberg himself. When asked directly whether his platforms were addictive, Zuckerberg described internal research on the question as “inconclusive” — a claim the state immediately countered by pointing to Meta’s own researchers, who found that specific product features were deliberately engineered to produce dopamine responses and maximize time-on-app.

When asked whether, as a parent, he had a right to know if a product his child was using was addictive, Zuckerberg responded that there was a lot to “unpack in that.” He added that he and his wife personally vet products before giving them to their own children — who, he noted, are “younger.”

Former Meta Vice President of Partnerships Brian Boland testified that he “absolutely did not believe that safety was a priority” to Zuckerberg and then-COO Sheryl Sandberg when he left the company in 2020.

The Whistleblower Who Hit Closest to Home

Among the most damaging testimony came from Arturo Béjar, a former engineering and product leader who spent six years at Meta. Béjar told the court about his personal experience: his own 14-year-old daughter had received unwanted sexual advances on Instagram, prompting him to raise alarms internally. He was largely ignored.

His characterization of Meta’s recommendation algorithms was blunt: “The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls.”

The Encryption Revelation

Internal Meta messages, revealed by New Mexico prosecutors, showed that Zuckerberg’s 2019 decision to enable end-to-end encryption by default on Facebook Messenger raised immediate red flags internally: staff projected that default encryption would prevent roughly 7.5 million child sexual abuse material (CSAM) reports from reaching law enforcement.

Meta announced midway through the trial that it would stop supporting end-to-end encrypted messaging on Instagram later this year. The company’s explanation: “very few people were opting in to end-to-end encrypted messaging in DMs, so we’re removing this option.” Few observers accepted that framing at face value.

What the Jury Decided

The jury found liability on both counts under New Mexico’s Unfair Practices Act. Their determinations included:

  • Meta made false or misleading public statements about platform safety
  • Meta engaged in unconscionable trade practices exploiting children’s vulnerabilities
  • Meta failed to enforce its own ban on users under 13
  • Meta’s algorithms prioritized sensational or harmful content
  • Meta concealed what it knew about child sexual exploitation on its platforms

The $375 million verdict is less than one-fifth of the roughly $2.1 billion the state sought, and Meta’s stock ticked up 5% in after-hours trading, a telling signal that Wall Street doesn’t consider this a bet-the-company outcome for a $1.5 trillion enterprise. But as legal analysts noted, this is the first jury verdict of its kind.

Phase Two Begins May 4

A bench trial beginning May 4 will address public nuisance claims and could result in additional financial penalties as well as court-mandated changes to Meta’s platforms. New Mexico is seeking mandatory age verification systems, removal of identified predators, and protections for minors from encrypted communications that shield bad actors from law enforcement detection. Injunctive relief — court orders forcing operational and design changes — would have nationwide implications.

The Section 230 Workaround

New Mexico’s legal strategy was designed precisely to sidestep Section 230 by focusing not on user content but on the design of the platforms. By framing the case around Meta’s algorithmic choices, product features, and deliberate design decisions rather than third-party content, Torrez’s office defeated Meta’s Section 230 motions outright. That legal architecture is now a tested template, and other states, prosecutors, and plaintiffs’ attorneys will study this case closely.

Compliance and Risk Implications

From a governance and compliance standpoint, this verdict raises immediate questions for any organization that operates consumer-facing digital products used by minors:

Risk documentation is a double-edged sword. Internal memos, Slack messages, and safety team reports that showed Meta’s awareness of child exploitation risks became prosecution exhibits. Organizations that conduct rigorous internal risk assessments should understand that those documents can surface in litigation.

“We have safety features” is not a defense. Meta’s attorneys repeatedly pointed to investments in content moderation and safety tooling. The jury rejected this framing. The standard being applied isn’t whether a company has safety programs — it’s whether those programs are effective, and whether leadership takes internal warnings seriously.

Design choices carry legal liability. The shift away from Section 230 toward product design liability is the most significant legal development in this space in a generation. Algorithms, recommendation engines, default settings, and age verification gaps are now potential sources of civil — and potentially criminal — exposure.
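To make concrete what a “design choice” looks like at the code level, here is a deliberately simplified, hypothetical feed-ranking sketch in Python. Every name, weight, and signal below is invented for illustration and reflects nothing about Meta’s actual systems; the point is only that whether a ranking score penalizes known-risky content is an engineering decision someone makes and documents.

    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        predicted_engagement: float  # model estimate of clicks/dwell time (hypothetical)
        harm_score: float            # classifier estimate of policy risk, 0 to 1 (hypothetical)

    def rank_engagement_only(posts: list[Post]) -> list[Post]:
        # Pure engagement optimization: sensational content that drives
        # clicks rises to the top regardless of its risk score.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

    def rank_with_safety_penalty(posts: list[Post], penalty: float = 5.0) -> list[Post]:
        # Same signals, but a deliberate design choice subtracts a
        # weighted risk term, changing what users actually see.
        return sorted(posts,
                      key=lambda p: p.predicted_engagement - penalty * p.harm_score,
                      reverse=True)

    posts = [
        Post("outrage-bait", predicted_engagement=0.9, harm_score=0.8),
        Post("friend-update", predicted_engagement=0.5, harm_score=0.0),
    ]
    print([p.post_id for p in rank_engagement_only(posts)])      # outrage-bait first
    print([p.post_id for p in rank_with_safety_penalty(posts)])  # friend-update first

In litigation terms, the penalty term in the second function is a documented design decision, and so is its absence.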

Encryption decisions have safety implications. The evidence around Meta’s end-to-end encryption rollout and its suppression of CSAM reporting to law enforcement underscores that privacy-enhancing technology can have serious child safety tradeoffs. Organizations making encryption decisions need to understand and document those tradeoffs explicitly.
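A minimal sketch, assuming a server-side scanner that matches message content against a list of known-abuse hashes, shows why the tradeoff is structural rather than a tuning problem. Production systems use perceptual hashing (PhotoDNA and similar); the plain SHA-256 and XOR pad below are stand-ins purely for illustration.

    import hashlib
    import os

    # Hypothetical hash list of known abuse imagery (stand-in values).
    KNOWN_ABUSE_HASHES = {hashlib.sha256(b"known-abuse-image").hexdigest()}

    def server_side_scan(payload: bytes) -> bool:
        # Without end-to-end encryption, the server sees message content
        # and can match it against the hash list, triggering a report.
        return hashlib.sha256(payload).hexdigest() in KNOWN_ABUSE_HASHES

    def e2ee_send(plaintext: bytes) -> bytes:
        # With E2EE, the server relays only ciphertext. A one-time XOR
        # pad stands in for a real cipher; either way, scanning the
        # relayed bytes recovers nothing about the content.
        key = os.urandom(len(plaintext))
        return bytes(a ^ b for a, b in zip(plaintext, key))

    message = b"known-abuse-image"
    print(server_side_scan(message))             # True: detectable and reportable
    print(server_side_scan(e2ee_send(message)))  # False: ciphertext never matches

The design question the trial record raises is not whether encryption is good, but whether an organization understood, and wrote down, exactly what detection capability it was giving up by turning it on by default.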

Legal analysts have increasingly drawn parallels between this wave of social media litigation and the landmark tobacco settlements of the 1990s. The tobacco analogy rests on a structural similarity: internal documents showing companies knew about harms, continued to market their products aggressively, and misled the public about the risks.

More than 40 state attorneys general have now filed lawsuits against Meta specifically, with claims centering on its role in contributing to a youth mental health crisis. California is running a parallel trial. A consolidated federal trial in Northern California is scheduled for later this year covering school districts nationwide.

“Meta’s house of cards is beginning to fall,” said Sacha Haworth, executive director of the Tech Oversight Project. “For years, it’s been glaringly obvious that Meta has failed to stop sexual predators from turning online interactions into real-world harm.”