The rapidly evolving landscape of generative AI faces new legal scrutiny: a class-action Grammarly AI lawsuit brought by prominent journalist Julia Angwin. The complaint, filed this week, alleges the company violated individuals' publicity and privacy rights by using their identities in its “Expert Review” AI feature without explicit permission.
The controversy centers on Grammarly’s “Expert Review” tool, which generated AI suggestions for users and purportedly attributed them to recognized experts and influencers. Angwin, along with several other journalists and academics, discovered that the system was leveraging their names and professional personas to lend perceived authority to its AI-generated content.
The class-action complaint, Angwin v. Superhuman Platform, Inc., asserts that the platform engaged in unauthorized commercial use of these experts’ identities. For many targeted by the feature, the discovery of their involvement came as a shock. Angwin reportedly confirmed her inclusion in the database only after investigative reporting by Platformer revealed that numerous public figures were being listed as “experts” without their knowledge or consent.
Implications for AI Feature Development
In response to the mounting backlash and the legal challenge, the company announced it would disable the tool. This follows an earlier attempt to manage the fallout by launching an opt-out email inbox, a move critics described as insufficient given the scale of the alleged unauthorized data usage.
In a statement addressing the incident, CEO Shishir Mehrotra acknowledged the failure, stating, “The agent was designed to help users discover influential perspectives… [but] we fell short on this. I want to apologize and acknowledge that we’ll rethink our approach going forward.”
This legal action marks a significant moment in the ongoing debate over data scraping and user consent in the generative AI era. As tech companies continue to integrate AI agents into consumer software, the Grammarly AI lawsuit serves as a stark reminder of the ethical and legal boundaries around commercializing digital identities. Legal experts anticipate the case could set a vital precedent for how platforms must secure consent before incorporating human personas into automated systems.