Policy & Law

The Identity Crisis of Generative AI: Grammarly Faces Class Action Over Unauthorized 'Expert' Cloning

Grammarly is facing a class-action lawsuit led by journalist Julia Angwin over its "Expert Review" feature, which allegedly cloned the identities of writers and academics without consent. The feature presented AI suggestions as being inspired by specific human experts, leading to charges of identity theft and violation of the Right of Publicity. Grammarly has since disabled the tool, marking a pivotal moment in AI ethics and legal boundaries.

Jessy
· 3 min read
Updated Mar 12, 2026
*Image: A courtroom setting where a digital silhouette of a human writer is being fragmented into binary code.*

⚡ TL;DR

Grammarly faces a class-action lawsuit for unauthorized AI cloning of writers' identities, leading to a major debate on digital rights.

The Legal Firestorm: Grammarly's AI 'Expert' Under Scrutiny

Generative AI has reached a critical legal inflection point, where technological innovation collides with fundamental rights of personal identity and intellectual property. Grammarly, the ubiquitous writing-enhancement platform, is facing a formidable class-action lawsuit that could redefine the boundaries of AI personalization. At the center of the storm is a recently launched, and now abruptly disabled, feature called "Expert Review." The feature purportedly provided AI-driven editing suggestions that were "inspired by" well-known authors, journalists, and academics. A group of prominent experts, however, claims their identities were cloned without their knowledge or consent for commercial gain.

The lead plaintiff in this landmark case is the award-winning journalist Julia Angwin. The complaint, filed on March 11, 2026, alleges that Grammarly engaged in a sophisticated form of "identity theft" by presenting AI outputs as if they carried the intellectual weight and reputation of specific human beings. As reported by Wired and The Verge, the feature presented these expert personas as marketing tools to entice users into premium tiers. Following the initial backlash and the filing of the lawsuit, Grammarly disabled the feature on Wednesday, stating they are "reimagining" the tool to ensure experts have "real control."

The Core of the Case: Right of Publicity and the Lanham Act

Legal scholars are closely watching this case, as it rests on the "Right of Publicity" and the Lanham Act (15 U.S.C. § 1125), which prohibits false endorsement. Unlike copyright law, which protects specific works of art or literature, the Right of Publicity protects an individual's right to control the commercial use of their name, likeness, and overall identity. By labeling an AI model's output with the name of a specific expert like Julia Angwin, Grammarly allegedly created a false association that misled consumers into believing the expert endorsed the AI’s suggestions.

Defense attorneys might argue that the AI was merely trained on public data to emulate a style, a practice often defended as "fair use." The plaintiffs, however, contend that the violation occurred at the point of commercialization: using a person's name and reputation to brand a software product is a far more direct exploitation than mere data analysis. Under California Civil Code Section 3344, unauthorized commercial use of a person's identity can trigger statutory damages. The case will likely test whether an AI "style transfer" or "persona" is legally distinct from the person it imitates.

Market Sentiment and Trends: A Wake-Up Call for AI Ethics

The reaction to the lawsuit has been swift and significant. Google Trends data shows a surge in search interest for "AI identity theft" and "Grammarly lawsuit" across major tech hubs. In regions like California and New York, where the creative and legal sectors intersect, the interest scores have hit peak levels over the past 48 hours. This indicates a broader anxiety among content creators about their digital legacy in an era where AI can recreate their life's work in milliseconds.

Industry analysts believe that Grammarly's misstep reflects a broader "move fast and break things" mentality that still permeates AI development. In the race to make AI feel more human and authoritative, developers have often bypassed the legal and ethical considerations of consent. The disabling of the "Expert Review" feature is seen as a major concession, but for many in the writing community, the damage to trust is already done. The lawsuit serves as a warning to other generative AI firms that using human expertise as a "vibe" or "label" without explicit licensing deals is no longer a viable strategy.

Shaping the Future of AI Product Design

The fallout from the Grammarly case is expected to trigger a paradigm shift in how AI products are designed and marketed. We are likely moving away from an era of "unofficial inspirations" and toward a more formal "licensing era." Much like how Spotify must license music or how stock photo sites pay contributors, AI platforms will need to establish clear revenue-sharing models with the individuals whose personas they seek to emulate.

TechCrunch reports that several burgeoning AI startups are already shifting their models to prioritize "human-in-the-loop" licensing. Some voice-cloning startups have begun paying voice actors "digital royalties" whenever their AI counterpart is used. The Grammarly lawsuit expands this conversation to the realm of thought, logic, and literary style. If an AI uses an economist’s specific framework to provide financial advice, the legal and financial obligations of the platform become a central question that the courts must now answer.

Outlook: Towards a 'Digital Persona' Protection Act

As we look toward the future, this class-action lawsuit may pave the way for new legislation specifically designed to protect the "digital persona." This could include federal-level protections that supersede the current patchwork of state laws. With the rise of AI Agents that act on behalf of individuals, the need for a robust legal framework to govern identity has never been more urgent.

Grammarly’s decision to "reimagine" its feature is a tactical retreat, but the broader war over AI and identity is just beginning. The outcome of the Angwin v. Grammarly case will serve as the first major boundary set by the legal system against the overreach of generative AI. It is a vital step toward a future where technology respects the human effort that makes intelligence possible in the first place. The tech world remains on high alert as the next chapter of digital rights is written in the courtroom.

FAQ

What exactly did Grammarly's "Expert Review" feature do?

The feature used AI to generate writing-revision suggestions and labeled them in the UI as "inspired by" a well-known writer (such as Julia Angwin). The problem: the named experts never consented to the use of their names or reputations in this commercial feature.

Why is this being called "identity theft"?

The plaintiffs argue that Grammarly did not merely train its AI to learn someone's writing style; it directly used these experts' names and public reputations to market a paid product, which they contend infringes the individual's Right of Publicity.

What impact will this lawsuit have on other AI companies?

It will force AI developers to be far more cautious when "personifying" or "branding" their products. Going forward, any AI feature that trades on a specific individual's style or name will likely require a formal licensing agreement and revenue sharing, rather than a vague "inspired by" label.