One day before he was fired by OpenAI’s board last week, Sam Altman alluded to a recent technical advance the company had made that allowed it to “push the veil of ignorance back and the frontier of discovery forward.” The cryptic remarks at the APEC CEO Summit went largely unnoticed as the company descended into turmoil.
But some OpenAI employees believe Altman’s comments referred to an innovation by the company’s researchers earlier this year that would allow them to develop far more powerful artificial intelligence models, a person familiar with the matter said. The technical breakthrough, spearheaded by OpenAI chief scientist Ilya Sutskever, raised concerns among some staff that the company didn’t have proper safeguards in place to commercialize such advanced AI models, this person said.
THE TAKEAWAY
• OpenAI researchers made a breakthrough in recent months that could lead to more powerful AI models
• Researchers used the new technique to build a model that could solve math problems it had never seen before
• The breakthrough raised concerns among some OpenAI employees about the pace of its advances and whether it had safeguards in place
In the following months, senior OpenAI researchers used the innovation to build systems that could solve basic math problems, a difficult task for existing AI models. Jakub Pachocki and Szymon Sidor, two top researchers, used Sutskever’s work to build a model called Q* (pronounced “Q-Star”) that was able to solve math problems that it hadn’t seen before, an important technical milestone. A demo of the model circulated within OpenAI in recent weeks, and the pace of development alarmed some researchers focused on AI safety.
The work of Sutskever’s team, which has not previously been reported, and the concern inside the organization suggest that tensions within OpenAI about the pace of its work will continue even after Altman was reinstated as CEO Tuesday night, and they highlight a potential divide among executives.
In the months following the breakthrough, Sutskever, who also sat on OpenAI’s board until Tuesday, appears to have had reservations about the technology. In July, he formed a team dedicated to limiting threats from AI systems vastly smarter than humans. On its web page, the team says, “While superintelligence seems far off now, we believe it could arrive this decade.”
Last week, Pachocki and Sidor were among the first senior employees to resign following Altman’s ouster. Details of Sutskever’s breakthrough, and his concerns about AI safety, help explain his participation in Altman’s high-profile ouster, as well as why Sidor and Pachocki resigned quickly after Altman was fired. The two returned to the company after Altman’s reinstatement.
In addition to Pachocki and Sidor, OpenAI President and co-founder Greg Brockman had been working to integrate the technique into new products. Last week, OpenAI’s board removed Brockman as a director, though it allowed him to remain as an employee. He resigned shortly thereafter, but returned when Altman was reinstated.
Sutskever’s breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models, a major obstacle in developing next-generation models, according to the person familiar with the matter. The research involved using computer-generated data, rather than real-world data such as text or images pulled from the internet, to train new models.
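The article does not say how that computer-generated data was produced or vetted. As a purely illustrative sketch of the general idea, and not a description of OpenAI’s actual pipeline, the Python snippet below generates candidate question/answer pairs with a toy stand-in for a model, keeps only the pairs that pass an automatic check, and collects the survivors as training examples; every function name here is hypothetical.

```python
# Illustrative sketch only: one common recipe for synthetic training data is to
# generate candidate problems and answers with an existing model, keep only the
# examples that pass an automatic check, and train on the survivors. Nothing here
# reflects OpenAI's actual method; the generator below is a trivial stand-in.

import json
import random

def generate_candidate() -> dict:
    """Stand-in for a model that proposes a problem plus a claimed answer."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    claimed = a + b + random.choice([0, 0, 0, 1])  # occasionally wrong, like a model
    return {"question": f"What is {a} + {b}?", "claimed_answer": claimed, "truth": a + b}

def is_verified(example: dict) -> bool:
    """Automatic check that filters out incorrect generations."""
    return example["claimed_answer"] == example["truth"]

def build_synthetic_dataset(n: int) -> list[dict]:
    """Collect n verified question/answer pairs for later training."""
    dataset = []
    while len(dataset) < n:
        candidate = generate_candidate()
        if is_verified(candidate):
            dataset.append({"question": candidate["question"],
                            "answer": candidate["claimed_answer"]})
    return dataset

if __name__ == "__main__":
    print(json.dumps(build_synthetic_dataset(5), indent=2))
```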
For years, Sutskever had been working on ways to allow language models like GPT-4 to solve tasks that involved reasoning, like math or science problems. In 2021, he launched a project called GPT-Zero, a nod to DeepMind’s AlphaZero program that could play chess, Go and Shogi. The team hypothesized that giving language models more time and computing power to generate responses to questions could allow them to achieve new academic breakthroughs.
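The article describes that hypothesis only at a high level. One widely published way to spend extra compute per question is to sample many candidate answers and keep the most common one, often called self-consistency or majority voting; the sketch below illustrates that idea with a toy solver standing in for a language model, and should not be read as a description of GPT-Zero or Q*.

```python
# Illustrative sketch only: "spending more compute per question" can mean sampling
# many candidate answers and keeping the most frequent one (majority voting).
# The noisy_solver below is a toy stand-in for a language model, not OpenAI's system.

import random
from collections import Counter

def noisy_solver(question: tuple[int, int]) -> int:
    """Toy model: answers a + b, but is occasionally off by one."""
    a, b = question
    return a + b + random.choice([0] * 8 + [-1, 1])

def answer_with_majority_vote(question: tuple[int, int], samples: int) -> int:
    """More samples means more compute per question and, usually, a more reliable answer."""
    votes = Counter(noisy_solver(question) for _ in range(samples))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    q = (17, 25)
    print("1 sample:  ", answer_with_majority_vote(q, 1))
    print("64 samples:", answer_with_majority_vote(q, 64))
```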
An OpenAI spokesperson declined to comment.