Meta is ending funding to its Oversight Board. Not immediately — the money runs through 2028, and conversations about what comes next are apparently ongoing. But the direction is clear: less funding each year, staff bracing for layoffs, Mark Zuckerberg increasingly involved in decisions the board was created to buffer.
The standard read is that this is about cost. Meta is building AI infrastructure at scale. The board costs money. Something gives. That framing is accurate, but it gets the causality backwards.
Meta isn’t cutting the board because AI infrastructure costs went up. It’s cutting the board because AI systems made independent oversight structurally optional — and “optional” in a company running on quarterly earnings and GPU capex is just another word for “expensive.”
Here’s what actually changed: the Oversight Board was built around cases. A post stays up or comes down; the board reviews, deliberates, publishes reasoning, sometimes reverses the call. That model assumes a human somewhere made the original decision — someone who could be questioned, whose reasoning could be examined, whose decision could be appealed.
Meta is now shifting trust and safety to automated systems. Casey Newton noted this directly in Platformer: the company has been “shifting more of its trust and safety functions from humans to automated systems” while simultaneously discussing the board’s end. Referrals from Meta to the board have slowed.
The timing isn’t a coincidence. When a model makes a billion content decisions a day, the appeals structure breaks. You can’t convene a board to deliberate on even a tiny fraction of those calls. The board’s entire operating model assumes a volume small enough to review.
The obvious counterargument: the Oversight Board never had real power anyway. Its policy recommendations were nonbinding. Zuckerberg personally rewrote Meta’s hate speech policies ahead of Trump’s inauguration without going near the board. The accountability was already symbolic.
That’s right. But symbolic structures do something even when their decisions are overridden: they impose a cost on being sloppy.
The board created a forcing function. Cases had to be framed as arguments. Decisions had to be documented. Reasoning had to be published in a form that outside observers could scrutinize. Even when the outcome was predetermined, you had to defend the call in writing to a group of independent jurists.
That cost disappears with automation. The model doesn’t produce a documented rationale. There’s no case file. The decision lives in weights nobody can read.
This is the governance gap opening up across the whole AI stack — not just at Meta. The accountability infrastructure built around human decision-making assumes that somewhere in the system, a person made a judgment call that can be examined and reversed. Oversight boards, audit committees, appeals processes — all of it was designed for a world where decisions have authors. Automated systems at scale don’t produce those artifacts. They produce outputs.
Meta could choose to build new accountability mechanisms suited to automated decision-making. It appears to be choosing not to.
The AI infrastructure buildout is explicitly cited as the source of the cost pressure. So the pattern completes itself: AI makes governance easier to drop while simultaneously creating the need for more of it — and companies are choosing to pocket the savings.
Governance didn’t fail here. It became a line item.