OpenAI, the AI research firm behind the popular generative AI chatbot ChatGPT, has dismissed two of its researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information. The pair were asked to leave following an undisclosed internal investigation, according to a report by The Information.
Aschenbrenner, a vocal advocate for safe AI development, was seen as a rising star on OpenAI's safety team and a close ally of OpenAI's chief scientist Ilya Sutskever. A prodigy who graduated from Columbia University at age 19, he previously worked with Future Fund, a philanthropic organization affiliated with the effective altruism movement, which prioritizes addressing the existential risks of advanced AI over short-term gains.
The other researcher, Pavel Izmailov, also worked on OpenAI's safety team before moving to its reasoning research team. There is no word yet on what information the two researchers allegedly leaked.
That said, one of the more prominent leaks in OpenAI's recent history involved a research project code-named Q*. The company had reportedly made a major breakthrough, using a new technique to build a model that could solve math problems it had never seen before. While internal demos fueled excitement among some staff, others were concerned that proper safeguards were not in place to commercialize such advanced models. Sources said this tension precipitated the failed attempt by Ilya Sutskever and the previous board to oust CEO Sam Altman.
OpenAI has positioned itself as a leader in responsible AI development, aiming to ensure that AI benefits humanity. However, firing employees for sharing information may strike some as ironic, given the company's name and mission. The research lab has faced criticism from Elon Musk and many in the AI community. Musk, one of its early backers, has expressed disappointment in the company's shift away from its original commitment to openness and transparency, saying that OpenAI has become a "closed source, maximum-profit company effectively controlled by Microsoft," which contradicts its initial goals as a nonprofit organization.
OpenAI has not publicly commented on the dismissals, and the researchers involved have not responded to requests for comment.