I’ll be keeping an eye on this.
A class action lawsuit accusing X.AI of enabling the mass creation of sexualized deepfakes marks a new stage of liability in cases involving artificial intelligence.
Now, plaintiff lawyers are focusing on technology platforms themselves—not just users—which could put AI companies at risk for large-scale lawsuits.
And as lawsuits and regulatory actions mount, attorneys are watching whom courts will hold accountable for AI-driven wrongdoing.
In the latest case, a plaintiff proceeding as Jane Doe from South Carolina filed a class action lawsuit against X.AI, claiming its AI chatbot Grok created and shared non-consensual sexualized deepfake images of women.
Attorney Sophia M. Rios of Berger Montague filed the case in the U.S. District Court for the Northern District of California, San Jose Division.
“X.AI’s conduct is despicable and has harmed thousands of women and children who were digitally stripped and forced into sexual situations that they never consented to,” Rios claimed in an emailed statement. “This class action seeks relief for those harmed by X.AI’s monetization of image-based abuse and harassment.”
[…]
The lawsuit asserts products liability arising from design and manufacturing defects, negligence for failing to use standard safety measures, violations of California’s right-of-publicity laws, defamation, intentional infliction of emotional distress, privacy violations under California law, and unfair business practices.
This legal approach is similar to a recent case in which Ashley St. Clair, the mother of one of Elon Musk’s children, filed a lawsuit in federal court in New York. She claims the Grok chatbot is “unreasonably dangerous as designed.”
“As deepfake litigation continues to grow, I see clear parallels to earlier accountability battles,” Mick S. Grewal, who represented 111 survivors of gymnastics doctor Larry Nassar and secured a $500 million settlement with Michigan State University, said in a statement.
“These days, plaintiffs are shifting their focus from bad-acting individuals to AI systems themselves, as illustrated by recent lawsuits targeting the platforms used to cause harm,” Grewal said. “This marks a meaningful evolution in how courts could make AI-driven technology responsible for the harmful actions of humans.”
For many years, Section 230 of the Communications Decency Act of 1996 provided a nearly impenetrable defense for tech firms: They argued they were merely hosting content from users, not generating it, David Himelfarb, managing partner at Toronto-based Himelfarb Proszanski, said in an email.
“That defense is falling apart,” Himelfarb said.
When plaintiffs claim AI systems are “unreasonably dangerous as designed,” they are using products liability principles in a manner that courts have not encountered previously, Himelfarb said.
I for one welcome this development and will be rooting for the plaintiffs. There are other avenues that could be taken to impose some controls on these things and their owners, but there’s no reason it has to be a one-or-the-other approach. Let’s try them all and see what works best.
