Elon Musk sued OpenAI Friday, alleging the company had strayed from its original mission, shifting from a nonprofit focused on benefiting humanity to a for-profit organization benefiting Microsoft and others. Although only time will tell how the case will play out, two artificial intelligence concepts are at the heart of it: generative AI (genAI) and artificial general intelligence (AGI). What is the difference, and why has the relationship between the two caused so much controversy?
OpenAI's agreement with Microsoft does not extend to AGI, and OpenAI's governing body retains the ability to determine when AGI has been reached. Meanwhile, the November ouster (and return) of OpenAI CEO Sam Altman was reportedly sparked by the belief that OpenAI had reached AGI, and some think that belief is also at the heart of Musk's lawsuit. One reason for this confusion is that neither genAI nor AGI is well defined, and we are quickly reaching the point where humanity will need more precise definitions of both terms.
What is AGI?
During the OpenAI CEO firing drama, I wrote an explainer on AGI, which includes OpenAI Chief Scientist Ilya Sutskever's definition of AGI from TEDAI 2023.
- He described a key principle of AGI as being potentially more intelligent than humans at anything and everything, with all the human knowledge to back it up.
- He also described AGI as having the ability to self-learn, thereby creating new, even potentially more intelligent AGIs.
What is GenAI and where does it come from?
GenAI took the world by storm with the release of ChatGPT in November 2022, thanks to its ability to generate coherent text, answer questions, compose poetry, create images, and write code, to cite just a few examples.
Although genAI takes many forms depending on the type of content it generates, the current AGI drama focuses largely on large language models (the type of genAI that generates language and associated content like code). This technology started with a type of AI called natural language processing. The AI reads vast amounts of text and learns to complete sentences. For example, if I start with the words "I'm very hungry because", the AI will learn that "I haven't eaten today." is a much more likely completion than "I didn't play golf today." New text is generated via this completion mechanism. The approach has also demonstrated its potential for generating code and even for accelerating drug discovery.
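The completion mechanism described above can be sketched with a toy bigram model: count how often each word follows another in a corpus, then score candidate next words by frequency. This is a deliberate simplification — real large language models use neural networks trained on vastly more data — but the "pick the likelier continuation" idea is the same. The tiny corpus below is invented for illustration.

```python
from collections import Counter

# Tiny illustrative corpus; a real model trains on trillions of tokens.
corpus = (
    "i am very hungry because i have not eaten today . "
    "i am very hungry because i skipped lunch . "
    "i am tired because i did not sleep . "
).split()

# Count bigrams: how often each word immediately follows another.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word_probability(prev: str, word: str) -> float:
    """Estimate P(word | prev) from the bigram counts."""
    total = sum(c for (p, _), c in bigrams.items() if p == prev)
    return bigrams[(prev, word)] / total if total else 0.0

# In this corpus, "because" is always followed by "i", never by "golf".
print(next_word_probability("because", "i"))     # 1.0
print(next_word_probability("because", "golf"))  # 0.0
```

Generating text is then just repeatedly sampling a likely next word and appending it — the same loop, scaled up enormously, underlies the fluent output of today's chatbots.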
A second technology, reinforcement learning from human feedback (RLHF), is layered on top, with humans teaching the AI the difference between good and bad completions. For example, "I haven't eaten today.", "No one gave me anything to eat.", or alternative completions that say the same thing offensively are all theoretically reasonable. However, a human can tell the AI that the first is most likely to be acceptable to a diverse human audience (OpenAI's customers). Over time, the AI learns not only to complete sentences, but also to determine which completed sentences will be better received by human standards.
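A heavily simplified sketch of this feedback step: human annotators compare pairs of completions, each comparison nudges a score up or down, and the system then prefers higher-scoring completions. Real RLHF trains a neural reward model and optimizes the language model against it; the scores, learning rate, and example sentences here are illustrative assumptions only.

```python
# Toy preference learning: pairwise human judgments adjust per-completion
# scores, and the system then favors the highest-scoring completion.
completions = [
    "I haven't eaten today.",
    "No one gave me anything to eat.",
]
scores = {c: 0.0 for c in completions}

def record_preference(preferred: str, rejected: str, lr: float = 1.0) -> None:
    """A human judged `preferred` better than `rejected`; update both scores."""
    scores[preferred] += lr
    scores[rejected] -= lr

# Several annotators prefer the neutral first completion.
for _ in range(3):
    record_preference(completions[0], completions[1])

best = max(scores, key=scores.get)
print(best)  # "I haven't eaten today."
```

The key design idea survives the simplification: humans never write the "right" answer directly; they only rank the model's own outputs, and the ranking signal steers future generations.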
The combination of these technologies has led to the powerful linguistic AIs we see today, which seem eerily human in their textual responses.
How close are genAI and AGI?
While AGI is still considered a goal by most, genAI has most certainly arrived. The key question is how the evolution of genAI could lead to AGI. If we look at the two key AGI principles above, have they been achieved?
- The first, an AI that draws on all human knowledge, is halfway there. GenAI today is capable of consuming most public data in a single model (the AI version of a brain). However, it is better at language-oriented tasks than at math and logical reasoning. A lively debate exists over whether genAI models (and large language models in particular) are simply "stochastic parrots", capable of convincingly imitating human language without understanding it. The AGI "scare" linked to the OpenAI CEO drama was reportedly due to a specific advance (Project Q*), which engineers considered a step forward because it performed the kind of mathematical reasoning considered essential to AGI.
- The second, the ability to create new AGIs, is also on the way. In November 2023, OpenAI introduced GPTs, a form of agent that can be created to serve as a personal assistant for any purpose. It is also now possible to create code with ChatGPT and automatically run that code to perform actions in the real world. Imagine a GPT that can not just read your email and suggest responses, but actually respond for you, possibly carrying on multiple email exchanges without your direct involvement. For many people, this is closer to AGI.
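The email-answering agent imagined above boils down to a simple loop: read a message, have the model draft a reply, send it, repeat — with no human in between. The sketch below is hypothetical; `generate_reply` and `send_email` are stubs standing in for a real language-model call and a real email API, neither of which is specified in the original.

```python
def generate_reply(email_body: str) -> str:
    # Stub: a real agent would call a language model here.
    return f"Thanks for your note about '{email_body[:30]}'. Replying automatically."

def send_email(to: str, body: str, outbox: list) -> None:
    # Stub: a real agent would call an email API; we just record the send.
    outbox.append((to, body))

def agent_loop(inbox: list, outbox: list) -> None:
    """Respond to every email without direct human involvement."""
    for sender, body in inbox:
        send_email(sender, generate_reply(body), outbox)

inbox = [("alice@example.com", "Can we move the meeting to Friday?")]
outbox = []
agent_loop(inbox, outbox)
print(len(outbox))  # 1
```

What makes such loops feel AGI-adjacent is not any single step but the autonomy: once the loop runs, the model's outputs trigger real-world actions that generate new inputs, without a person approving each turn.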
Where do we go from here?
The definitions of genAI, AGI, and their associated capabilities depend on who you ask. The only thing we can be sure of is that genAI is evolving rapidly and every day brings a new capability that can be considered a step towards AGI. The Musk/OpenAI lawsuit may force more details about the development of AI technology into the public domain, and it could even create a legally binding opinion on what AGI is. At least it will keep this crucial topic in the public eye, which I think will help us think more seriously about this technology and its impact on humanity.