Liquid neural networks could help us reach the next level of efficiency with AI/ML
Many of us can agree that over the past few years, progress in AI/ML has been rapid. Today we have another new solution to power what we do with AI and machine learning.
Some of the biggest news in the tech community involves new types of neural network models that are fundamentally different from what we’ve seen over the past decade.
I’m particularly proud of this company because the original team developing these solutions is at MIT’s CSAIL lab.
But even if it came from somewhere else, it would still be groundbreaking news that everyone should know about.
Let me start with an explanation of two successive types of innovative neural networks.
The first are called 'liquid neural networks' – they are able to learn on the job and process information continuously. The research teams involved say they are based on the brain functions of small species, such as rodents and birds, and meet four specific criteria:
· Flexible
· Causal
· Robust
· Explainable
The second criterion is very important, because it explains a lot of how these networks operate with far fewer nodes than traditional designs.
The fourth criterion is also extremely important, because it preserves the idea that we should not build black-box AI systems – that we should know why they do what they do, while they do it.
Now, this introduction of liquid NNs made waves some time ago, but what we recently revealed on stage is a newer design called "closed-form continuous-time" or CFC models.
These use a liquid neural network design, but with a key addition: researchers have figured out how to solve, in closed form, the differential equations that simulate the interaction of two neurons through synapses (by using sigmoidal synapse designs in the models).
By applying differential equations to each node, these new networks can do the same kinds of advanced things as a traditional network with 1,000 or 2,000 neurons. But here’s the big news: They can perform these tasks with something like 19 neurons, plus a perceptual model. If this seems oddly specific to you, read on…
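To make that a bit more concrete, here is a toy sketch (in PyTorch) of the gating idea behind closed-form continuous-time cells: instead of running a numerical ODE solver, each step blends the hidden state between two learned branches using a sigmoid gate that depends on how much time has elapsed. The class name, layer sizes, and exact equations below are my own simplification for illustration – this is not the authors' released code or their exact formulation.

```python
import torch
import torch.nn as nn

class ToyCfCCell(nn.Module):
    """A drastically simplified 'closed-form continuous-time' style cell.

    No ODE solver is used: a sigmoidal gate, scaled by the elapsed time dt,
    blends two learned branches of the hidden state in a single formula.
    (Illustrative only; not the published CFC equations.)
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        combined = input_size + hidden_size
        self.f = nn.Linear(combined, hidden_size)  # drives the time-dependent gate
        self.g = nn.Linear(combined, hidden_size)  # one candidate state
        self.h = nn.Linear(combined, hidden_size)  # the other candidate state

    def forward(self, x: torch.Tensor, hidden: torch.Tensor, dt: float) -> torch.Tensor:
        z = torch.cat([x, hidden], dim=-1)
        gate = torch.sigmoid(-self.f(z) * dt)  # sigmoidal "synapse", aware of elapsed time
        return gate * torch.tanh(self.g(z)) + (1.0 - gate) * torch.tanh(self.h(z))

# A tiny 19-neuron head sitting on top of features from some perception model.
cell = ToyCfCCell(input_size=32, hidden_size=19)
hidden = torch.zeros(1, 19)
features = torch.randn(1, 32)             # stand-in for perception-model output
hidden = cell(features, hidden, dt=0.05)  # dt can vary from step to step
print(hidden.shape)                       # torch.Size([1, 19])
```

The point of the sketch is simply that each update is an explicit formula rather than a call to a differential-equation solver, which is where the speed and efficiency claims come from.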
At a recent conference in Davos, CSAIL Director Daniela Rus and a panel spoke about their understanding of this developing technology:
"I was really passionate about how we could build AI systems that were not only highly accurate, but also reliable and efficient enough to be able to solve the most important problems in a sustainable way," said Alexander Amini, an MIT scientist and co-founder of Liquid AI. "We're really excited about this technology because it's a new kind of fundamental model: it's very powerful, efficient and deployable at the edge."
Rus talked about how this new approach to machine learning makes models better suited to running safety-critical systems.
“The end result is very compact solutions to very complex problems,” she said. Companies, she added, can deploy these models internally and run them behind a firewall, or deploy them on edge devices.
“They are cheaper and have a lower carbon footprint,” she said of these systems in general.
She also explained how these models capture cause and effect – which matters for decision-making and for algorithmic efficiency – and how they can probe systems and explain their behavior.
"Every node is more powerful," said Ramin Hasani, also a co-founder of Liquid AI, speaking about the value of compression in these systems. "You throw a lot of data at them."
In terms of applications, Hasani said pioneers have already started making connections.
“We have pipelines, we have infrastructure; we speak directly with businesses,” he said.
To return to some of the work carried out by the teams, there is the creation of "neural circuit policies" based on the nervous system of the nematode C. elegans (a kind of cousin of the flatworm – you can learn more in the project's code on GitHub).
In the public resources, you can see fully connected, random, and NCP wirings, as well as some of the code behind these types of systems.
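If you want to experiment, the team's open-source package exposes these wirings directly. The snippet below is a sketch based on my reading of the public repository's documented usage; the package, module, and class names (ncps, AutoNCP, CfC) and their arguments are assumptions on my part, so check them against the GitHub README before relying on them.

```python
# pip install ncps torch   (package name per the public repo -- verify first)
import torch
from ncps.wirings import AutoNCP   # sparse, C. elegans-inspired wiring
from ncps.torch import CfC         # closed-form continuous-time RNN layer

wiring = AutoNCP(units=28, output_size=4)  # 28 neurons total, 4 motor outputs
model = CfC(input_size=20, units=wiring)   # a CfC layer wired with the NCP topology

x = torch.randn(2, 50, 20)   # (batch, time, features), e.g. a short sensor stream
outputs, state = model(x)    # per-step outputs plus the final hidden state
print(outputs.shape)         # expected: torch.Size([2, 50, 4])
```

The same repository also ships fully connected and random wirings, which is how you can compare the sparse, worm-inspired NCP graph against denser baselines (again, verify the exact class names against the repo).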
Another way to explain this is that continuous-time hidden states allow these algorithms to operate differently on the input data. For example, in the context of driving an autonomous vehicle, the researchers suggest that the better networks keep their attention on the horizon where the road is, rather than on the bushes at the side of the road, as the vehicle moves through space.
It seems to me, based on the above, that this is all very interesting and fascinating work: as we have continued to study neural network models, we have kept finding better applications and ways to make them more efficient. But this new project is truly a game-changer, and I'm proud to be associated with the same institution where these people are working to expand our collective knowledge of AI's capabilities.
(Disclosure: I am an advisor to LiquidAI.)