OpenAI was thrown into turmoil on Friday when CEO Sam Altman was abruptly fired by the company's board of directors. Hours later, President Greg Brockman resigned in protest, signaling growing divisions within OpenAI's leadership.
The moves shocked many both inside and outside OpenAI, given Altman and Brockman's central roles in guiding the company's meteoric rise on the back of its powerful natural language AI. But reports suggest simmering tensions had emerged between factions within OpenAI over the pace of commercialization and the approach to AI safety.
According to a joint statement from Altman and Brockman, the chaotic events unfolded rapidly, orchestrated largely by board member Ilya Sutskever. Sutskever texted Altman asking to talk at noon on Friday, then informed him on that call that he was being removed as CEO, effective immediately.
Just minutes later, Sutskever notified Brockman that he was being ousted from his role as board chairman but could remain president. The board published a blog post announcing the leadership changes around the same time, blindsiding partners like Microsoft and even OpenAI's own employees.
In the wake of the leadership shakeup, interim CEO Mira Murati sought to reassure employees. In a memo on Friday, she referenced “our mission and ability to develop safe and beneficial AGI together.” Murati outlined three pillars going forward: “maximally advancing our research plan, our safety and alignment work—particularly our ability to scientifically predict capabilities and risks, and sharing our technology with the world in ways that are beneficial to all.”
Her message underscored that responsible AI development is still the company's priority during this period of transition. However, it remains to be seen whether OpenAI can stay focused on its core values while charting a new course amid internal power struggles. Restoring faith in the organization's governance and oversight will be key.
The abruptness and secrecy surrounding the board's moves raise questions about its motives and its adherence to proper governance procedures. Sutskever appears to have wielded outsized influence in engineering the ousters of Altman and Brockman, but his rapid consolidation of control, achieved by circumventing standard process, risks fostering distrust within OpenAI's ranks.
According to sources, disagreements had intensified between researchers advocating greater caution around AI development and those eager to press forward with commercial products. The Information reported that "divisions persisted over AI safety," noting that co-founder Ilya Sutskever led a team focused on identifying AI risks.
Three senior researchers—Jakub Pachocki, Aleksander Madry and Szymon Sidor—resigned after the leadership shakeup, signaling their allegiance to the ousted leaders. Madry had just been appointed head of a new preparedness team charged with assessing potential dangers from AI systems.
In damage-control conversations with employees, remaining leaders stressed OpenAI's commitment to developing AI safely. But the controversial nature of Altman and Brockman's removal raised questions. Sutskever, who orchestrated the board vote to remove them, was pressed by employees concerned about the company's direction on whether the move constituted a "coup."
“You can call it this way,” Sutskever said of the coup characterization, according to The Information. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.”
Ultimately, OpenAI's board faces a crisis of credibility that threatens to undermine the organization's mission. While change may have been needed, the chaotic, opaque nature of Friday's events raised red flags.
Responsible leadership requires transparency. If the company hopes to recover from this debacle, it has some explaining to do. Vaguely waving off the CEO's abrupt firing as a matter of communication issues just doesn't cut it.
As an organization developing technologies with immense potential for benefit or harm, OpenAI depends on trust in its motives and integrity.
If Altman truly failed to fulfill his duties, the board should explain how. If substantive concerns existed, it should clarify what steps were taken to address them before resorting to termination. It must also convince employees and the public that proper governance procedures were followed in removing him and Brockman.
Most importantly, by blindsiding long-term investors and partners like Microsoft, the board severely damaged important relationships and eroded crucial goodwill. The company's leaders must work to reestablish faith that they are acting in service of OpenAI's charter and vision. The fallout from the firing is just beginning.