The Authors Strike Back: My Style is Not Your Training Data
Grammarly, the ubiquitous AI-powered writing assistant, has found itself at the center of a landmark legal battle over intellectual property and identity rights. TechCrunch reports that investigative journalist Julia Angwin is leading a class-action lawsuit against the company, alleging that Grammarly turned writers into 'AI personas' without their consent. The lawsuit claims that Grammarly's 'AI expert review' feature effectively sold digital replicas of prominent authors' writing styles to other users, profiting from their identities without compensation or permission.
Legal Grounds: The Right of Publicity and the 'NO FAKES Act'
The case centers on the 'Right of Publicity,' a legal doctrine that prohibits the unauthorized commercial use of an individual's identity. Under statutes such as California Civil Code Section 3344, misappropriating a person's name, voice, or likeness for commercial gain is actionable. The novel question here is whether an author's distinctive prose style can likewise be protected as a digital extension of that identity. The litigation coincides with the 'NO FAKES Act' moving through the U.S. Congress, a bipartisan bill aimed at curbing the proliferation of unauthorized AI-generated digital replicas.
Grammarly’s Retreat: Pulling Features Amid Backlash
In response to mounting legal pressure and an outcry from the writing community, Grammarly has officially disabled the controversial feature. As reported by BBC Tech, the company pulled the 'author-impersonation' tool, which let users adopt the voice of specific professional writers. Grammarly says it is reviewing its data practices; meanwhile, the FTC has signaled heightened scrutiny under Section 5 of the FTC Act of 'unfair' data harvesting in which creators are not adequately notified that their work will be used to build commercial clones.
Broader Context: Data Privacy and AI Safety Failures
The Grammarly lawsuit is part of a broader pattern of technical and ethical failures across the tech sector. BBC Tech also reports a major data leak at Lloyds Banking Group, where an app glitch let customers see other users' private transactions. Meanwhile, a study by Cambridge University researchers warned that AI toys for children often misread emotional cues and respond with inappropriate or harmful content. Together, these incidents highlight the widening gap between the pace of AI deployment and the safety protocols needed to protect individual privacy.
Future Outlook: Setting the Standards for AI Ethics
The resolution of the Grammarly case will likely set a major precedent for the generative AI industry. It forces companies to confront a fundamental question: can individual creativity be harvested as a commodity? Moving forward, 'opt-in' mandates for training data may become the industry standard. For creators, this is a fight for digital sovereignty; for tech firms, it is a warning that the 'move fast and break things' era of data usage is meeting its legal match in the form of identity protection laws.