The New Yorker just published a gripping investigation into Sam Altman and OpenAI, based on more than 100 interviews, never-before-disclosed internal memos, and years of documents. It's worth reading in full. But a few things stood out.
What the investigation found
Before firing Altman in 2023, chief scientist Ilya Sutskever compiled 70 pages of Slack messages and HR documents alleging a "consistent pattern of lying." He told the board: "I don't think Sam is the guy who should have his finger on the button."
OpenAI promised its superalignment team 20% of compute for safety research. The actual number? Between 1 and 2%. On the oldest hardware. The team was dissolved without completing its mission.
The investigation after Altman's reinstatement? No written report was ever produced. Just oral briefings to two board members Altman helped select.
And now OpenAI is building data centres in the UAE seven times the size of Central Park, funded by autocracies, while lobbying against every meaningful regulation.
This is not just about one CEO
This is about what happens when safety depends on promises instead of structure.
Every charter got rewritten. Every safety team got dissolved. Every commitment got renegotiated when the money got big enough. Anthropic weakened its responsible scaling policy. Google dropped restrictions to win military contracts. The pattern is industry-wide.
Why this matters for your organisation
If you're using AI in your organisation today, you're trusting someone's promise that your data is handled responsibly. That safety research is properly funded. That the rules won't change when the next funding round closes.
The New Yorker investigation shows exactly how much those promises are worth. Not because these are bad people. Because the incentive structure makes it almost impossible to keep them.
The question was never "which AI company can we trust?"
The question is: how do you build a system where trust doesn't depend on one company keeping its word?
This is exactly why we built Aimable around Trusted Intelligence. Not trust in a vendor's promises. Not trust in a CEO's good intentions. Structural trust.
Your data is protected before it ever reaches any model. Your answers are grounded in your own verified sources. Your rules are enforced automatically, not by policy documents that can be quietly rewritten.
The model provider can change their charter, fire their safety team, or sell out to the highest bidder. Your organisation stays protected. Because the trust lives in the architecture, not in someone's promise.
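To make the idea of structural trust concrete, here is a minimal sketch of what it can look like in code. This is illustrative only, not Aimable's actual implementation: the redaction patterns, the grounding check, and the rule enforcement are all simplified placeholders, and the model provider is just a function you pass in.

```python
import re

# Illustrative sketch: trust enforced in the architecture, not in the provider.
# All names, patterns, and checks below are hypothetical simplifications.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Strip obvious personal data before anything leaves your boundary."""
    text = EMAIL.sub("[email]", text)
    return PHONE.sub("[phone]", text)

def ground(question: str, verified_sources: list[str]) -> list[str]:
    """Select context only from your own verified documents (toy keyword match)."""
    terms = question.lower().split()
    return [doc for doc in verified_sources if any(t in doc.lower() for t in terms)]

def enforce_rules(answer: str, context: list[str]) -> str:
    """Reject answers that are not supported by the supplied context (toy check)."""
    if not context or not any(snippet[:40] in answer for snippet in context):
        return "No verified source supports an answer to this question."
    return answer

def ask(question: str, verified_sources: list[str], call_model) -> str:
    safe_question = redact(question)                    # protection before the model
    context = ground(safe_question, verified_sources)   # grounding in your sources
    answer = call_model(safe_question, context)         # any provider, swappable
    return enforce_rules(answer, context)               # rules enforced automatically
```

The point of the sketch is the shape, not the details: because the provider is just an argument, you can swap vendors without touching the protections, and the redaction, grounding, and rule checks run regardless of what any one company promises.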
Read the full New Yorker investigation