On the afternoon of our summit in Davos, we had a very interesting discussion on the future of AGI (or call it something else if you prefer, as some panelists suggested… we’ll think about it). On the panel were Yann LeCun, vice president and chief AI scientist at Meta, and Daniela Rus, director of MIT’s CSAIL laboratory. We also had Connor Leahy, CEO of Conjecture, and Stuart Russell from Berkeley. The subject? Keeping AI under control (what a Pandora’s box!)
But first, a moment early on gave a sense of just how quickly AI is evolving, particularly in voice cloning, whose ramifications we’re only now beginning to grapple with. The moderator, Max Tegmark (also at MIT), asked each of the panelists to say their name – and from those few syllables he was able to generate very elaborate voice clones of them giving presumably made-up opinions!
After that, the panel started talking about how to control these very powerful technologies…
“Human intelligence is very specialized,” LeCun said. Instead of AGI, he added, we should be talking about “human-level AI,” and, he suggested, we’re not there yet. The path forward, he said, is to make machine learning as efficient as the learning of humans and animals.
“It’s helpful,” he said. “This is the future, but we have to do it right.”
“We want to improve our tools,” Rus said. “We want to try to understand nature. The way forward is to start with a simple organism and go from there.”
Not everyone was as eager to charge ahead, at least not on the same timeline. Russell emphasized that there is a difference between knowing and doing, a distinction he used to argue for caution.
“It’s an important distinction,” he said of these two principles on which engineering is built, adding that there should be limits to what we know, limits to what we do, and limits to how we turn what we know into technologies.
He gave the example of every person having an LLM in their pocket capable of thinking like a human. I thought: we are indeed seizing this new opportunity, but what will it mean?
“Should everyone have this ability?” he asked. “(And) is it a good idea to build systems that outperform humans?”
LeCun suggested it was too early for these kinds of questions, saying we don’t currently have a “plan” for a human-level AI system. “It’s going to take a long time,” he said, comparing current discussions to debates about future technology in 1925.
When we have a plan for technology, he suggested, we will also have a plan for control.
“Evolution built us with certain motivations,” LeCun noted. “We can build machines with the same drives… you can set goals for the AI, and it will achieve those goals.”
The idea that AI could somehow “take over humanity” strikes him as “absurd” and “ridiculous” – even though it’s exactly what keeps plenty of other people up at night!
Meanwhile, Russell spoke of a “flawed methodology” which, if misapplied, could have catastrophic consequences, and illustrated how some of these plans can go wrong.
“It becomes impossible to properly specify this goal,” he said, describing a scenario in which we might not be able to fully guide the process. “We are moving forward, but we have absolutely no proposals on how to make these systems safe and beneficial.”
Leahy added: “What makes technology useful is what makes it dangerous,” comparing AI to other technologies like nuclear weapons, or even biological weapons. “The best and worst things can happen.”
LeCun responded that we can imagine all kinds of dystopian scenarios, but that past technologies had prototypes that we were able to control, and AI might be the same.
“There are mechanisms in society to stop the deployment of technologies that are really dangerous,” he said.
When Russell suggested, again, that it was reasonable to consider the risks of AI, Rus agreed, but noted that some ML problems have already been addressed, at least in part – the threat of bias in these systems, for example.
“There’s really great progress,” she said. “I am… optimistic about the use of machine learning and AI in safety-critical applications.”
Here’s an interesting moment from the end, as we were preparing for the next session:
At Tegmark’s urging, the panelists each called for new architectures.
LeCun spoke of “goal-driven AI” with “virtual guardrails” that are not susceptible to hijacking by black hats.
Rus spoke of “liquid networks” which, she suggested, have good attributes, such as being provably causal, interpretable, and explainable.
Leahy spoke of “social technology” that doesn’t sidestep the idea of a human in the loop.
“The world is complicated,” he said. “It’s both a political and a technological problem.”
I came away from this session thinking about the big question: will we be able to harness AI to our advantage? And what happens if we can’t? These panelists each have a lot of experience and insight. It’s worth the time and thought.