Grammarly's AI Impersonation Scandal: How "Expert Review" Became a $5 Million Lawsuit and a Case Study in Everything Wrong With AI Ethics
The writing assistant impersonated Stephen King, Carl Sagan, and hundreds of journalists without consent. Then it offered writers an "opt-out" solution. Then it got sued into oblivion.
On March 11, 2026, Grammarly's parent company Superhuman pulled the plug on "Expert Review"—an AI feature that had become, in just seven months, one of the most spectacular ethical failures in the AI industry's short but tumultuous history.
The premise seemed helpful enough: users could get writing feedback "inspired by" famous authors, academics, and journalists. Want Stephen King's advice on your horror novel? Neil deGrasse Tyson's perspective on your science essay? Kara Swisher's sharp editorial eye on your tech analysis?
Expert Review would provide exactly that—AI-generated feedback attributed to specific, named individuals, living or dead, without their knowledge, consent, or compensation.
The backlash was immediate, brutal, and entirely predictable.
Tech journalist Kara Swisher, whose "advice" the feature claimed to offer, didn't mince words: "You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck."
Investigative journalist Julia Angwin discovered her professional identity had been turned into a commercial product. She called the AI imitation a "slopperganger"—noting the suggestions actually made writing worse while trading on her hard-earned reputation.
Gaming journalist Wes Fenlon found himself giving writing advice he'd never written. Tech journalist Casey Newton discovered a virtual version of himself handing out tips. "I've long assumed that before too long, AI might take my job," Newton wrote. "I just assumed that someone would tell me when it happened."
Within 24 hours of Angwin filing a class action lawsuit against Superhuman, more than 40 additional writers contacted her legal team, describing the company's actions as a "brazen violation of the law."
The lawsuit seeks damages exceeding $5 million—and that's just the jurisdictional minimum. The actual figure will be calculated based on Expert Review's revenue from Grammarly's $12/month Pro subscription.
Superhuman CEO Shishir Mehrotra announced the feature's shutdown on LinkedIn, acknowledging the company had "missed the mark." But the damage was done. Expert Review will be remembered not as an innovation, but as a cautionary tale about what happens when tech companies prioritize speed over consent, and AI capabilities over basic ethics.
What Was Expert Review, Exactly?
Expert Review launched in August 2025 as part of Grammarly's aggressive push into generative AI. The feature was available exclusively to Grammarly Pro subscribers ($12/month or $144/year) and promised to "take your writing to the next level" with suggestions from "leading professionals, authors, and subject-matter experts."
How It Worked
When a user submitted text, Expert Review would:
- Analyze the subject matter
- Select relevant "experts" from its database
- Generate feedback styled as advice from those specific individuals
- Present that feedback with the expert's name prominently displayed
The experts ranged from:
- Bestselling authors: Stephen King and other literary giants
- Scientists and academics: Carl Sagan, Neil deGrasse Tyson
- Tech journalists: Kara Swisher, Casey Newton, Julia Angwin, Wes Fenlon
- Subject-matter specialists across countless fields
Living or dead, these individuals had their names attached to AI-generated writing advice without permission, notification, or compensation.
The Technical Foundation
According to Superhuman's own documentation, the feature drew on "publicly available information from third-party LLMs."
Translation: the company scraped web content—articles, interviews, published work—fed it through large language models, and used that to generate personas that mimicked how these writers might critique your work.
This wasn't collaboration. It was digital ventriloquism.
The Disclaimer That Admitted Everything
Buried deep in the documentation was a telling disclaimer:
"References to experts in this product are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities."
In other words: We're using these people's names and reputations to sell our product, but legally speaking, we're not saying they actually endorse us.
The company seemed to want it both ways—benefit from the implication of association with prominent writers while distancing itself in the fine print.
It was precisely the kind of legal hedging that makes lawyers rich and ethicists despair.
The Backlash: Writers Discover Their "Sloppergangers"
Reactions ranged from outrage to incredulity to dark humor.
Julia Angwin: "This Is My Livelihood"
Angwin, an investigative journalist and New York Times contributing opinion writer, told the BBC she was "stunned" to find her professional identity marketed as a commercial product.
"Editing is a skill… it's my livelihood, but it's not something I've ever thought about anyone trying to steal from me before."
What made it worse? The AI version of Angwin gave bad advice.
She described the output as a "slopperganger"—a portmanteau of "slop" (low-quality AI-generated content) and "doppelganger." The AI-Angwin made sentences "worse, more complex" rather than better.
So not only was her identity stolen—it was used to deliver substandard work that damaged the very reputation it was exploiting.
Kara Swisher: "Get Ready for Me to Go Full McConaughey"
Tech journalist Kara Swisher, known for her sharp interviewing style and no-nonsense approach, didn't hold back:
"You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck."

The "full McConaughey" reference was to Matthew McConaughey's aggressive legal approach to protecting his likeness—a warning that Swisher was prepared to fight.
Casey Newton: "I Just Assumed Someone Would Tell Me"
Casey Newton, founder of tech newsletter Platformer, found himself giving writing advice through Expert Review without his knowledge.
His response captured the surreal nature of AI displacement:
"I've long assumed that before too long, AI might take my job. I just assumed that someone would tell me when it happened."
The statement highlighted a deeper issue: workers across industries are being replaced or replicated by AI without notification, consent, or compensation.
Wes Fenlon: "Opt-Out Is Laughably Inadequate"
Gaming journalist Wes Fenlon called Grammarly's eventual opt-out policy "laughably inadequate recourse for selling a product that verges on impersonation and profits on unearned credibility."
The problem wasn't just unauthorized use—it was the entire business model of commodifying professional expertise without involving the experts.
Benjamin Dreyer: Mocking the "Generous Opt-Out Offer"
Author and editor Benjamin Dreyer sarcastically thanked Grammarly for their "bountifully generous opt-out offer, oh thank you, thank you."
He added: "But in the meantime, if I can cause some corporate shyster a few moments' worth of agita, I will feel as though my hard work ain't been in vain for nothin'."
The Opt-Out Disaster: Putting the Burden on Victims
Facing mounting criticism, Superhuman initially tried to defuse the situation by allowing writers to opt out of Expert Review.
This "solution" revealed just how fundamentally the company misunderstood the problem.
Why Opt-Out Was Insulting
For living writers: They had to actively monitor tech news to discover they'd been impersonated, then email Grammarly to request removal. The burden fell entirely on victims to police their own exploitation.
For deceased writers: Carl Sagan, who died in 1996, couldn't exactly opt out from the afterlife. Neither could countless other late scholars, authors, and experts whose reputations were being monetized without estate permission.
For unknown victims: Many writers still don't know their names are being used. Expert Review was available to millions of Grammarly Pro users. How many obscure academics, regional journalists, or niche subject-matter experts were impersonated without ever discovering it?
The Legal Problem With Opt-Out
Peter Romer-Friedman, Angwin's attorney, explained why opt-out fundamentally failed:
"The burden of consent should never have been on the writers."
Under US law, using someone's name and likeness for commercial purposes without consent is illegal: it constitutes misappropriation of identity and violates the right of publicity.
Opt-out assumes the company has the right to use your identity by default unless you object. Legally, it's the opposite: they need permission first.
James Bareham, former creative director at The Verge, put it bluntly:
"I'm no lawyer, but I think 'We're going to keep stealing your stuff until you tell us you don't want us to steal your stuff' isn't quite the defense Grammarly thinks it is—at least not in the court of public opinion. I hope this company is sued into oblivion. I canceled my pro account today."
The Class Action Lawsuit: $5 Million and Counting
On March 11, 2026, Julia Angwin filed a class action lawsuit against Superhuman and Grammarly in the Southern District of New York.
The Legal Claims
The lawsuit alleges:
- Misappropriation of identity: Using names and likenesses for commercial gain without consent
- Violation of publicity rights: Commodifying professional reputations without authorization
- Unjust enrichment: Profiting from others' expertise and credibility without compensation
The Damages
The filing states damages exceed $5 million—the minimum required to establish federal jurisdiction. But Romer-Friedman made clear that's just the floor.
Actual damages will be calculated based on:
- Expert Review's revenue from Grammarly Pro subscriptions
- Number of users who accessed the feature
- Duration of unauthorized use
- Harm to individual reputations
Given Grammarly has 40 million users globally and Expert Review ran for seven months, the total could be substantially higher.
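A back-of-envelope calculation shows why $5 million is only a floor. Only the 40-million user count, the $12/month price, and the seven-month run come from the reporting above; the subscription and usage rates below are invented purely for illustration.

```python
# Hypothetical revenue estimate. Every rate is an assumed placeholder;
# only the user count, price, and duration come from the article.
total_users = 40_000_000   # Grammarly's global user base (from article)
pro_rate = 0.02            # ASSUMED fraction on the $12/month Pro plan
feature_rate = 0.10        # ASSUMED fraction of Pro users who used Expert Review
price = 12                 # Pro subscription, $/month (from article)
months = 7                 # Expert Review's lifetime (from article)

revenue = total_users * pro_rate * feature_rate * price * months
print(f"${revenue:,.0f}")  # → $6,720,000
```

Even with these deliberately conservative assumptions, attributable revenue clears the $5 million jurisdictional minimum before any reputational damages are counted.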
The Momentum
Within 24 hours of filing, Romer-Friedman received contact from more than 40 additional writers wanting to join the case.
The rapid response demonstrates how widespread the impersonation was—and how many people discovered their identities had been exploited only after the lawsuit made headlines.
Superhuman's Response: Too Little, Too Late
On March 11, 2026, Superhuman CEO Shishir Mehrotra announced on LinkedIn that Expert Review would be disabled.
His statement attempted damage control:
"Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. The agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans."
The Problems With This Response
"Build deeper relationships with their fans"? Carl Sagan can't build relationships from beyond the grave. Living writers didn't ask for fan relationships through AI impersonation.
"Valid critical feedback"? This downplays what happened. It wasn't "feedback"—it was legal action, mass outrage, and subscription cancellations.
"Reimagine the feature"? Mehrotra indicated the company still sees potential in AI-simulated expert feedback, suggesting they view the problem as execution, not concept.
Grammarly's Director of Product Management, Ailian Gan, told Decrypt the company disabled Expert Review after feedback showed it had "missed the mark."
But "missed the mark" suggests they were aiming for something reasonable and fell short. The entire premise—impersonating people without consent for commercial gain—was ethically compromised from the start.
The Broader Context: AI's Consent Crisis
Expert Review isn't an isolated incident. It's part of a pattern in AI development where companies:
- Build systems using others' work without permission
- Launch products that exploit that work commercially
- Face backlash
- Offer inadequate remedies
- Get sued
Other Recent Examples
AI training data: Most AI models were trained on copyrighted books, articles, art, and code scraped without permission. Multiple lawsuits are pending.
Voice cloning: Tools can replicate anyone's voice from a short sample. Scarlett Johansson famously objected when OpenAI's Sky voice sounded suspiciously like her.
Deepfakes: Celebrities and ordinary people find themselves in AI-generated videos they never made.
Image generation: Artists discovered their styles were being replicated by AI trained on their portfolios without compensation.
The common thread: AI companies operate on a "build first, ask forgiveness later" model that treats consent as optional.
What Happens Next?
For the Lawsuit
The case will proceed as a class action unless Superhuman settles. Given the strength of publicity rights law and the documented harm, settlement seems likely.
Damages could range from millions to tens of millions depending on Expert Review's revenue.
For Expert Review
Mehrotra claims the company will "reimagine" the feature to give experts "real control over how they want to be represented—or not represented at all."
But it's unclear how this would work. If experts must opt in, the database shrinks dramatically. If they're compensated, costs skyrocket. If the feature relies on generic "expert-style" feedback without names, the value proposition evaporates.
Expert Review, as originally conceived, is dead.
For the AI Industry
This case will set precedent for using individuals' professional identities in AI systems. If Angwin wins, other AI companies will face pressure to:
- Obtain consent before using names and likenesses
- Compensate individuals whose work trains AI systems
- Provide transparent opt-in mechanisms, not opt-out
The days of scraping first and dealing with consequences later may be ending—slowly, through lawsuit after lawsuit.
Conclusion: The Ethics AI Companies Keep Ignoring
Grammarly's Expert Review disaster reveals a fundamental problem in AI development: the prioritization of technical capability over human consent.
The company had the technology to impersonate writers convincingly. So they did—without stopping to ask if they should.
The result:
- Reputations exploited without permission
- Substandard AI output damaging those reputations
- Mass outrage from the writing community
- A multi-million dollar lawsuit
- Feature shutdown
- Brand damage
All of this was predictable. All of it was preventable.
As AI capabilities accelerate, these ethical failures will multiply unless companies change course. The technology to replicate human expertise, voices, likenesses, and creative work exists. But existence doesn't imply permission.
Writers, artists, journalists, and creators built their reputations through years of work. Those reputations have value. Using them without consent to sell commercial products isn't innovation—it's exploitation.
Grammarly learned that lesson the hard way. Hopefully, other AI companies are paying attention.
Because the next "Expert Review" disaster is already being built somewhere. The only question is which company will be next to discover that moving fast and breaking things works great—until you break something that fights back.
Sources:
- BBC News
- The Verge
- Platformer
- Futurism
- Decrypt
- Wired
- Dataconomy