Casey Newton found out he’d become an AI editor last Friday. Not fired, not replaced — just repurposed. Grammarly had launched a feature called Expert Review, which offered users writing feedback from “leading professionals, authors, and subject-matter experts.” Newton was on the list. So were Stephen King, Neil deGrasse Tyson, and Carl Sagan — the last of whom has been dead for nearly thirty years.

The AI-generated feedback wasn’t written by any of these people. Grammarly ran the text through a model, presumably trained on publicly available work by or about them, and attached their names as a credibility stamp. A disclaimer buried several hundred words into a support page clarified that “references to experts are for informational purposes only.” Newton’s response was characteristically dry: “I’ve long assumed that before too long, AI might take my job. I just assumed that someone would tell me when it happened.”

Grammarly has since pulled the feature. But the episode reveals something more durable than one company's consent problem.

What happened here isn’t primarily a story about intellectual property law or corporate overreach. It’s a demonstration that expertise is now technically separable from the person who built it. Your published thinking — the years of columns and analysis and takes — becomes training data. The model synthesizes it. And the synthesis can be deployed at a scale and speed that no individual can match. Attached to your name, or not.

The traditional scarcity model of expertise runs like this: ideas are hard to spread, so the person who has them accrues status by being the conduit. You publish to build a reputation, and that reputation lets you sell the judgment you haven’t published yet. The bottleneck was distribution.

AI broke that. Distribution is now free and instantaneous. The ideas themselves aren’t scarce anymore — they can be replicated, recombined, and resold faster than you can update your LinkedIn bio. What remains scarce is the person: the timing, the relationships, the judgment under conditions.

Those three are worth separating. Timing means your unpublished thinking is still yours; once it's out, it's feedstock. The Grammarly situation was possible only because the material was already public. Relationships are harder to replicate than content, because they rest on trust in a specific person, not that person's archives. And synthesis under conditions (the judgment call in a live conversation, reading a specific room, advising a particular client) is something AI can only imitate: it pattern-matches against your past work rather than reasoning about the present situation.

This creates a real tension for anyone who publishes openly. Full gating — stop publishing, lock everything behind paywalls and private channels — destroys the reach mechanism that makes expertise valuable in the first place. Nobody impersonates an expert nobody’s heard of. The point of Grammarly using Casey Newton’s name was that people recognized it.

The answer isn’t gatekeeping. It’s sequencing. Publish the sharpest version of an idea first to a trusted audience — the people who know you, who’ll see the provenance. Then synthesize it more broadly. Then write the polished public version. The scarcity you’re protecting isn’t the idea. It’s access to the person.

Newton’s quote — “I just assumed someone would tell me when it happened” — is funny. It’s also the wrong frame. The question was never whether AI would take his job. It’s whether the conditions under which expertise has value are changing fast enough that the publishing strategy that built his career is no longer the one that protects it going forward.

Those aren’t the same strategy. The sooner people recognize that, the more options they have.