A potential breakthrough in the field of artificial intelligence may have contributed to Sam Altman’s recent ouster as CEO of OpenAI.
According to a Reuters report citing two sources acquainted with the matter, several staff researchers wrote a letter to the organization’s board warning of a discovery that could potentially threaten the human race.
The two anonymous individuals claim this letter, which informed directors that a secret project named Q* had resulted in AI solving grade-school-level math problems, reignited tensions over whether Altman was moving too fast in his bid to commercialize the technology.
Just a day before he was sacked, Altman may have referenced Q* (pronounced Q-star) at a summit of world leaders in San Francisco when he spoke of what he believed was a recent breakthrough.
“Four times now in the history of OpenAI—the most recent time was just in the last couple of weeks—I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward,” said Altman at a discussion during the Asia-Pacific Economic Cooperation.
He has since been reinstated as CEO in a spectacular reversal of events after staff threatened to mutiny against the board.
According to one of the sources, after being contacted by Reuters, OpenAI’s chief technology officer Mira Murati acknowledged in an internal memo to employees the existence of the Q* project as well as a letter that was sent to the board.
OpenAI could not be reached immediately by Fortune for a statement, but it declined to provide a comment to Reuters.
Trained to identify patterns and infer outcomes
So why is all of this special, let alone alarming?
Machines have been solving mathematical problems for decades going back to the pocket calculator.
The difference is that conventional devices are designed to arrive at a single answer through a series of deterministic instructions, using the binary logic all personal computers employ, in which values can only be true or false, 1 or 0.
Under this rigid system, machines have no capacity to diverge from their programming in order to think creatively.
By comparison, neural nets are not hard-coded to execute certain commands in a specific way. Instead, they are trained, much as a human brain would be, on massive sets of interrelated data, giving them the ability to identify patterns and infer outcomes.
Think of Google’s helpful Autocomplete function that aims to predict what an internet user is searching for using statistical probability—this is a very rudimentary form of generative AI.
That’s why Meredith Whittaker, a leading expert in the field, describes neural nets like ChatGPT as “probabilistic engines designed to spit out what seems plausible.”
Should generative AI prove able to arrive at correct solutions to mathematical problems on its own, it would suggest a capacity for higher reasoning.
This could potentially be the first step toward developing artificial general intelligence, a form of AI that can match or surpass humans across a wide range of cognitive tasks.
The fear is that an AGI would need strict guardrails, since it might one day come to view humanity as a threat to its existence.