What a 70-page memo, 200 pages of private notes, and over 100 firsthand accounts tell us about the man running OpenAI.
Fall 2023.
OpenAI Chief Scientist Ilya Sutskever sat down at his computer and finished a 70-page document.
Compiled from Slack message logs, HR communications, and internal meeting minutes, it was built to answer a single question: Can Sam Altman, the man in charge of what may be the most dangerous technology in human history, actually be trusted?
Sutskever's answer was on the first line of the first page, under a heading that read: "Sam exhibits a consistent behavioral pattern..."
The first item in that pattern: lying.
Two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz published a sweeping exposé in The New Yorker. They spoke to over 100 people, obtained previously unpublished internal memos, and got hold of more than 200 pages of private notes kept by Anthropic co-founder Dario Amodei during his time at OpenAI. Pieced together, these documents paint a picture far uglier than the 2023 boardroom drama let on: how OpenAI went from a nonprofit built to protect humanity to a commercial juggernaut, with nearly every safety guardrail torn down, often by the same pair of hands.
Amodei put it bluntly in his notes: "OpenAI's problem is Sam himself."
OpenAI's "Original Sin"
To understand why this report matters, you first need to grasp just how unusual OpenAI is.
In 2015, Altman and a group of Silicon Valley elites did something almost unheard of in business history: they set up a nonprofit to develop what could become the most powerful technology ever created. The board's mandate was explicit: safety came before the company's success, even before its survival. In plain terms, if OpenAI's AI ever turned dangerous, the board was obligated to shut the whole thing down.
The entire structure hinged on one assumption: whoever controls AGI must be an extraordinarily honest person.
What if that bet was wrong?
The report's central bombshell is that 70-page document. Sutskever is no office politician; he is one of the world's foremost AI scientists. But by 2023, he had become increasingly convinced of one thing: Altman was systematically lying to executives and the board.
One concrete example: in December 2022, Altman told the board that several upcoming GPT-4 features had cleared safety review. Board member Helen Toner asked to see the approval documentation and discovered that two of the most contentious features, user-customizable fine-tuning and personal assistant deployment, had never been approved by the safety panel.
Something even more flagrant happened in India: Microsoft had launched an early version of ChatGPT there before completing the required safety reviews. The board learned of it only when an employee reported the violation to another board member.
Sutskever documented another incident in his memo: Altman told former CTO Mira Murati that the safety approval process wasn't that important, claiming the General Counsel had already signed off. Murati went to verify. The General Counsel's response: "I have no idea where Sam got that impression."
Amodei's 200 Pages of Private Notes
Sutskever's document reads like a prosecutor's indictment. Amodei's 200-plus pages are more like a witness's diary, written at the scene of a crime.
During his years leading safety at OpenAI, Amodei watched the company retreat inch by inch under commercial pressure. In his notes, he recorded a key detail from Microsoft's 2019 investment deal: he had written a "merge and assist" clause into OpenAI's charter, stipulating that if another organization found a safer path to AGI, OpenAI would stop competing and help that organization instead. As far as he was concerned, it was the single most important safety provision in the entire deal.
Just before the deal closed, Amodei made a troubling discovery: Microsoft had secured veto power over that clause. The implication was stark: even if a competitor someday found a better path, Microsoft could unilaterally block OpenAI from honoring its obligation. The clause stayed on paper, but from the moment the ink dried, it was a dead letter.
Amodei went on to leave OpenAI and co-found Anthropic. The competition between the two companies runs deeper than market share; it reflects a fundamental disagreement about how AI should be built.
The Vanishing 20% Compute Pledge
One detail in the report is particularly unnerving: the story of OpenAI's "Superalignment Team."
In mid-2023, Altman emailed a Ph.D. student at Berkeley who was researching "deceptive alignment," the phenomenon where an AI plays nice during testing but pursues its own objectives once deployed. Altman said he was deeply worried about the problem and was considering a $1 billion global research prize. The student was inspired, left his program, and joined OpenAI.
Then Altman changed course. No external prize. Instead, OpenAI would create an internal "Superalignment Team." The company publicly announced it would devote "20% of its existing compute" to the effort, a commitment valued at over $1 billion. The announcement used grave language, warning that failure to solve alignment could lead to "human disempowerment, or even human extinction."
Jan Leike, who was tapped to lead the team, later told reporters the pledge itself functioned as a powerful "talent retention tool."
The reality? Four people who worked on or closely with the team said the actual allocation came to roughly 1 to 2 percent of OpenAI's total compute, running on the company's oldest hardware. The team was eventually dissolved. Its mission remained unfinished.
When reporters asked to speak with OpenAI staff responsible for "existential safety" research, the company's PR team delivered a response that bordered on parody: "That's not... an actual thing."
Altman, for his part, was candid. He told reporters his "instincts don't really line up with a lot of traditional AI safety thinking," and that OpenAI would continue to pursue "safety projects, or at least projects adjacent to safety."
A CFO Sidelined, an IPO Full Speed Ahead
The New Yorker report was only half the bad news that day. Hours apart, The Information broke another major story: a deep rift between OpenAI CFO Sarah Friar and Altman.
Friar had privately told colleagues she didn't think OpenAI was ready to go public this year. Two reasons: the volume of procedural and organizational groundwork still outstanding, and the financial exposure created by Altman's pledge to spend $600 billion on compute over five years. She wasn't even sure the company's revenue growth could support commitments of that scale.
Altman, however, wants to push ahead with an IPO in Q4.
It gets stranger. Friar no longer reports to Altman directly. As of August 2025, she reports to Fidji Simo, OpenAI's CEO of Applications. Simo went on medical leave last week. Step back and take in the full picture: a company sprinting toward an IPO where the CEO and CFO are in fundamental disagreement, the CFO doesn't report to the CEO, and her direct superior is on leave.
Even executives inside Microsoft have reportedly grown frustrated, accusing Altman of "distorting facts, going back on his word, and repeatedly overturning agreements that were already settled." One Microsoft executive reportedly said: "I think there's a real chance he ends up being remembered as a fraud on the scale of Bernie Madoff or SBF."
The Two Faces of Altman
A former OpenAI board member described two qualities he observed in Altman, in what may be the most cutting character sketch in the entire report.
Altman, this board member said, possesses an extraordinarily rare combination of traits: in every face-to-face encounter, he radiates an intense desire to be liked and to win the other person over. At the same time, he displays what borders on sociopathic indifference to the consequences of deceiving them.
The coexistence of those two traits in one person is vanishingly rare. For a salesman, it is the perfect gift.
The report draws an apt comparison: Steve Jobs was famous for his "reality distortion field," his ability to make the world believe in his vision. But even Jobs never told a customer, "If you don't buy my MP3 player, everyone you love will die."
Altman has said things along those lines, about AI.
One Man's Honesty, Everyone's Risk
If Altman ran an ordinary tech company, these allegations would be little more than riveting business drama. But OpenAI is not ordinary.
By its own admission, it is building what may be the most powerful technology in human history, one that could reshape the global economy and labor markets (OpenAI itself recently published a policy white paper on AI-driven job displacement), and one that could equally be used to engineer mass-scale bioweapons or launch devastating cyberattacks.
Every meaningful safety guardrail is now hollow. The founders' nonprofit mission has been subordinated to the IPO push. The former chief scientist and former head of safety have both concluded the CEO cannot be trusted. A key partner has compared the CEO to SBF. Given all of this, on what basis should one person unilaterally decide when to release AI models that could reshape the trajectory of civilization?
Gary Marcus, a professor emeritus at New York University and a longtime advocate for AI safety, wrote this after reading the report: If a future OpenAI model could produce mass-scale bioweapons or trigger catastrophic cyberattacks, are you truly comfortable letting Altman alone decide whether to release it?
OpenAI's response to The New Yorker was terse: "Much of this article rehashes previously reported events, relying on anonymous claims and cherry-picked anecdotes from sources with obvious personal agendas."
A quintessentially Altman-style reply: no engagement with specific allegations, no denial of the memos' authenticity, just an attack on the sources' motives.
The For-Profit That Ate Its Own Mission
OpenAI's decade-long arc, reduced to a story outline, goes like this:
A group of idealists alarmed by AI risk founded a mission-driven nonprofit. The organization produced extraordinary technical breakthroughs. The breakthroughs attracted enormous capital. Capital demanded returns. The mission gave way. The safety team was disbanded. Dissenters were pushed out. The nonprofit structure was converted into a for-profit entity. The board that once had the power to shut the company down is now stacked with the CEO's allies. The company that publicly pledged 20% of its compute to protect humanity now has PR staff saying, "That's not an actual thing."
The protagonist of this story has been given the same label by more than a hundred people who were there: "unconstrained by truth."
He is preparing to take the company public at a valuation north of $850 billion.
This article synthesizes publicly reported information from The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and other media outlets.