A U.S. district court in the Northern District of California ruled this week that online advertising platforms can be treated as "makers" of fraudulent statements when their AI systems exercise ultimate authority over how ads are assembled and delivered. It is the clearest signal yet that platforms cannot hide behind Section 230-style framings when a model, not a human, is the proximate author.
The ruling arrives alongside quieter but equally consequential news: U.S. courts imposed at least $145,000 in sanctions in the first quarter of 2026 on attorneys who filed briefs containing AI-generated citation errors. Judges are no longer treating hallucinations as novelties.
The combined effect tightens the compliance perimeter around generative AI in two high-stakes domains. For ad platforms, it creates a strong incentive to either keep a meaningful human in the loop or invest in verification layers that can defensibly flag fabricated claims before delivery. For law firms, the message has become unambiguous: verify every citation, or pay.
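What a "verification layer" might look like in practice is left open by the ruling, but the core idea is a pre-delivery gate: no claim ships unless it traces to a verified source. The sketch below is a minimal, hypothetical illustration of that gate; the `Claim` structure, the `flag_unverified` function, and the notion of a verified-source registry are all assumptions for illustration, not anything described by the court or by any platform.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    """A single factual assertion extracted from ad copy."""
    text: str
    source_id: Optional[str]  # provenance reference, if any (None = unsourced)

def flag_unverified(claims: list[Claim], verified_sources: set[str]) -> list[Claim]:
    """Return the claims that lack a verifiable source.

    A delivery pipeline would hold back the ad if this list is non-empty.
    """
    return [c for c in claims if c.source_id not in verified_sources]

# Hypothetical usage: one sourced claim, one unsourced claim.
claims = [
    Claim("Clinically proven to whiten teeth", "study-123"),
    Claim("Recommended by 9 out of 10 dentists", None),
]
flagged = flag_unverified(claims, verified_sources={"study-123"})
# Only the unsourced claim is flagged; delivery would be blocked until it is
# either sourced or removed.
```

The design choice worth noting is that the gate is conservative: anything without affirmative provenance is blocked, which is the defensible posture if a court treats the platform's model as the "maker" of whatever ships.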
The deeper question the ruling surfaces is about authorship and agency. If a model "exercises ultimate authority" and is therefore a maker, what does that imply for other automated systems — customer service agents, triage tools in healthcare, AI-driven credit decisions? The court didn't answer. The next ones will have to.
