New research is making things much clearer and giving us a real view of what is possible.
If you didn’t already think we’re headed for lightning-fast innovation, just check out these three sources…
I know it seems like people keep beating the drum – saying we’re going to have all kinds of intelligent robots taking over aspects of our lives in the next few years – but if we’re so vehement about it, it’s because people with real insight can see this happening and know what it means.
I wanted to write this blog post to show how it works – how to convince the average person that they should really care about the groundbreaking work that is being done!
So, with that in mind, how do we know exactly how quickly AI is being integrated into our society?
The first source is your general internet commentary. For example, look at this article from Tom’s Guide arguing that 2024 will be the year of AI adoption.
2023, the author claims, was the year in which we learned how large language models work in theory. 2024, on the other hand, will be the year we see markets fundamentally transformed.
“We’re going to see generative AI in refrigerators, toys, exercise equipment, lawn mowers, and in our cars,” Ryan Morrison wrote just after last Christmas. “Chatbots will allow us to interact with objects in the same way we talk with ChatGPT today, and AI vision technology will give devices the ability to see what we are doing… The reality is that we have just seen a year where the floodgates of decades of research were opened. New technological breakthroughs were constantly occurring and investment was reaching record levels.”
Interesting…
Here’s the second source, and this is an important one – I want to highlight a lot of what MIT scientist Alex Amini said in a recent talk during the MIT Venture Studio just a few days ago on the direction AI is heading this year.
His prediction? This summer, we’ll see great enterprise applications!
Amini is a leader in a lab where researchers are working on a new type of network called a liquid neural network, in which artificial neurons process information continuously and differential equations represent the interactions between neurons via simulated synapses.
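To make the "differential equation per synapse" idea concrete, here's a toy sketch of a single continuously integrating neuron. This is an illustrative Euler-step model of the general ODE concept, not the actual formulation used in Amini's lab (liquid networks learn their time constants, among other things); the parameter names `tau`, `w_in`, and `w_rec` are my own labels.

```python
import math

def liquid_neuron_step(x, u, dt=0.01, tau=1.0, w_in=1.0, w_rec=0.5, b=0.0):
    """One Euler integration step of an ODE-based ("liquid") neuron:

        dx/dt = -x / tau + tanh(w_in * u + w_rec * x + b)

    The state x decays toward zero while the input u and the neuron's own
    recurrent state drive it through a simulated synapse (the tanh term).
    """
    dxdt = -x / tau + math.tanh(w_in * u + w_rec * x + b)
    return x + dt * dxdt

# Unlike a layer that fires once per forward pass, this neuron processes
# a continuous stream of inputs, one small time step at a time.
x = 0.0
for u in [0.5, 0.5, 1.0, 0.0]:
    x = liquid_neuron_step(x, u)
```

The point of the sketch is the shape of the computation: the neuron's state is a function of time governed by an equation, not a single static activation.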
He says many of the ramifications will be evident this year.
“What will this ecosystem look like in two years? Or even a year?” he asks. “Will it be liquid neural networks that completely replace transformers, and will transformers be obsolete? I think it’s very likely that transformers will be obsolete in the near future.”
Importantly, he also has some thoughts on the fast-approaching regulation of this technology.
“If you look at, basically, the US regulations on large language models, it fascinates me, because the way they judge a better-performing language model is purely based on how many flops it requires – how much computation it uses for training. …And for me, that’s a totally retrospective way of thinking, right? …It doesn’t matter how much math you use to train… It’s just that it’s the best metric we have today for judging these things.”
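To see why the flops criterion feels crude, here's the standard back-of-envelope arithmetic. The "6 flops per parameter per token" rule is a common community heuristic (not something Amini stated), and the 10^26-operations reporting threshold comes from the 2023 US executive order on AI; the model sizes are hypothetical.

```python
def training_flops(n_params, n_tokens):
    """Back-of-envelope training cost: ~6 floating-point operations
    per parameter per training token (a widely used heuristic)."""
    return 6 * n_params * n_tokens

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
flops = training_flops(70e9, 2e12)          # 8.4e23 operations

REPORTING_THRESHOLD = 1e26                   # 2023 US executive order figure
needs_reporting = flops > REPORTING_THRESHOLD  # False for this example
```

Note what the metric ignores entirely: architecture, data quality, and what the model can actually do – which is exactly Amini's complaint.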
Amini goes into a very in-depth description of how this all works, and we can talk about that later, but in general he argues that the transformer architecture underlying today’s neural network models could soon become obsolete, despite (his example) Facebook’s big investment in Nvidia chips.
In the near future, he said, we will build and evaluate models based not on scale, but on capabilities.
He also talked about the idea of a “mixture of experts,” where different components work with each other to do the kind of deep cognitive work we associate with the human brain.
“We have these feedback systems… and… one is, maybe not adversarial, but the other has… insight (into) the first one,” he says. “And you can use that to improve the quality of the first (system). I think one really exciting thing I’m seeing from OpenAI is this real investment in the idea of a mixture of experts. …Don’t just train a model, but train a model with multiple paths through that same model, so you can combine different concepts and knowledge bases. And the model can essentially choose… which path to take to answer a given question. This diversifies knowledge. And that helps with a lot of things, including robustness. And when you think about adversarial training, or the adversarial objective of these models, that also becomes particularly important.”
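The "multiple paths, model chooses" idea can be sketched in a few lines. This is a minimal illustration of mixture-of-experts routing in general – a gate scores each expert, and only the top-scoring few actually run – and makes no claim about how OpenAI implements it; the experts, gate weights, and `top_k` value here are all made up for the example.

```python
import math

def softmax(scores):
    """Turn raw gate scores into probabilities that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route input x through only the top_k highest-scoring experts.

    experts: list of callables (the 'paths' through the model)
    gate_weights: one gating weight per expert (toy linear gate)
    """
    probs = softmax([w * x for w in gate_weights])
    # The model "chooses which path to take": keep only the top_k experts.
    ranked = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in ranked)
    # Combine the chosen experts, weighted by their (renormalized) gate scores.
    return sum(probs[i] / norm * experts[i](x) for i in ranked)

# Three toy "experts"; the gate picks and blends the two most relevant ones.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
y = moe_forward(3.0, experts, gate_weights=[0.1, 0.5, -0.2])
```

The payoff Amini describes falls out of the structure: each expert can specialize (diversified knowledge), and since only `top_k` paths run per input, capacity grows without every parameter being used on every question.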
All of this really tells us about how quickly AI is evolving, but here’s a third source that I think will surprise you!
As I wrote this, Microsoft Copilot appeared, unprompted, on the right of the screen.
It posed the question: “Is 2024 the year of AI?” – then provided a response, citing the events of this year’s CES show, and more.
“AI is getting better at returning relevant content from prompts,” the non-human sentient model wrote, “whether it’s textual information or high-quality images. This year we will see the greater impact of AI and its reach into the lives of more people.”
And this was said in the Queen’s English, or the King’s English, if you prefer. In fact, when I asked the AI to tell me which of these nicknames I should use, it responded with a whole host of help, including this:
“Interestingly, the Queen herself doesn’t strictly adhere to the Queen’s English these days,” the AI told me in a calm, professional manner. “Her accent has evolved during her reign, and even the BBC now features a range of regional voices. So even if the Queen herself doesn’t always use Queen’s English, it remains a characteristic of the United Kingdom’s upper classes. Additionally, there is a grammatical oddity known as the royal or majestic plural ‘we.’ This involves referring to oneself with the plural pronoun ‘we’ instead of the singular ‘I.’ … As for ‘King’s English,’ it is a less commonly used term. Historically, it referred to the English language as written and spoken correctly by educated people in the United Kingdom when the country had a king. The standard title for English monarchs from Æthelstan to John was ‘King of the English.’ Later, in Norman times, ‘King of the English’ remained standard, with occasional use of ‘King of England.’ In summary, Queen’s English and King’s English denote formal and correct linguistic usage associated with royalty and the upper echelons of society. However, language evolves, and even the royal family adapts to changing linguistic norms!”
In other words, if you’re not convinced that AI is taking over, just ask AI what it’s going to do this year – and it will tell you for itself, in a way that is really quite eloquent, undoubtedly more eloquent than the average person on the street.
Get where I’m going?
We’re seeing this work and evidence up close and personal, with all the experts weighing in on these new models and showing us exactly what the roadmap is going to look like. It is up to us – for example, regulators and the business world – to pay attention.
(Full disclosure: I am an advisor to LiquidAI, the group at MIT that is building new forms of networks similar to some of those discussed above.)