In the first class this semester of the MIT AI Venture Studio course, we got some interesting insights into the current state of artificial intelligence, and students heard from industry leaders about priorities as we move into the next phase of this rapidly evolving field.
Ramesh Raskar, who leads the class, gave us an overview of what’s happening in AI, talking about a sea change toward models that will be more powerful than anything we’ve seen so far.
He described the very different use cases for large language models, as opposed to what we get from generative AI more broadly.
But perhaps more relevantly, he talked about the shift from supervised learning to unsupervised learning, and from on-screen learning that happens on a device to three-dimensional learning that will happen closer to the real world.
What I mean by that is that when you integrate this kind of technology with robotics and autonomous agents that can move around on their own, you get a very different type of AI – and, for many people, a much scarier one. Things get even stranger when you move from supervised learning to reinforcement learning, where there is no clear distinction between labels and whatever the program takes from its test or training data.
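To make that distinction concrete, here is a minimal toy sketch – my own illustration, not anything from the class – contrasting the two paradigms: a supervised learner is handed the correct label for every example, while a reinforcement learner only receives a reward after acting.

```python
import random

# Supervised learning: the correct label is given for every example.
data = [(0, 0), (1, 2), (2, 4), (3, 6)]            # pairs (x, label = 2x)
w = 0.0
for _ in range(200):
    x, y = random.choice(data)
    w += 0.05 * (y - w * x) * x                    # gradient step toward w = 2
print(f"learned weight: {w:.2f}")                  # approaches 2.00

# Reinforcement learning: no labels, only a reward after each action.
values = {"left": 0.0, "right": 0.0}               # the agent's value estimates
for _ in range(500):
    action = random.choice(list(values))           # explore at random
    reward = 1.0 if action == "right" else 0.0     # the environment's hidden rule
    values[action] += 0.1 * (reward - values[action])
print(values)                                      # 'right' climbs toward 1.0
```

In the supervised half the “answer” is in the data itself; in the reinforcement half the program has to infer which action is good purely from the reward signal, which is part of what makes its behavior harder to anticipate.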
Raskar also contrasted what he called “sprinkle AI” – frivolous tinkering around the edges – with more substantial three-dimensional artificial intelligence, where the use cases will be much more obvious.
In business terms, he highlighted three elements of current AI work: niche applications, platforms, and particular use cases. Returning to the concept of “AI on screen,” where the technology operates through a screen interface, he suggested that without strong underlying technology, some of these applications are little more than window dressing.
“They’re easy to build,” he said of “screen AI” products, “but also very easily upset, right, because someone else… can build a similar solution, and as long as they have the tenacity, they can beat you.”
As an example, he talked about Uber: the dispatch algorithms at the heart of the business are the secret sauce that competitors won’t easily be able to replicate.
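To give a flavor of the kind of problem such a system solves, here is a hedged sketch of the simplest possible dispatcher: a greedy nearest-driver match. This is not Uber’s actual algorithm – real systems layer ETAs, pricing and global optimization on top – and every name, coordinate and formula below is purely illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    lat: float
    lon: float
    available: bool = True

def haversine_km(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two points, in km."""
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def dispatch(rider_lat, rider_lon, drivers):
    """Greedy baseline: assign the nearest available driver."""
    candidates = [d for d in drivers if d.available]
    if not candidates:
        return None
    best = min(candidates, key=lambda d: haversine_km(rider_lat, rider_lon, d.lat, d.lon))
    best.available = False  # mark the driver as booked
    return best

drivers = [Driver("d1", 42.36, -71.09), Driver("d2", 42.35, -71.06)]
print(dispatch(42.3601, -71.0942, drivers))  # -> d1, the closer driver
```

The defensible “secret sauce” is everything this sketch leaves out: demand prediction, pricing, and matching at city scale.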
In describing this type of competitive strategy, Raskar pointed out that there is a lot of money in this area – about $99 trillion over five years!
It is important, he said, that the work is done responsibly, safely and ethically.
So what do these new 3D AI projects look like?
Moving on to a 3D use case, he talked about headsets, cameras and other equipment for first responders, with a focus on AR – something that, you can imagine, would look a lot like what you see in the old Terminator movies. Unless, of course, it’s used wisely.
Back to Uber and how the new tech economy is going to work: Raskar talked about the need to pursue three steps in AI development – capture data, analyze data, and engage, by which he presumably meant getting your project known and functioning.
On the concept of “data capture,” he distinguished between taxis, the old system, and the new, disruptive Uber.
The difference, he argued, is that there is essentially no data capture in the taxi system. Although newer taxis are equipped with card systems, traditionally there was no digital component: you paid your fare in cash based on the meter. With Uber, by contrast, everyone’s trip data is captured and scrutinized by machines – and the machines are about to become much smarter.
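As a rough illustration of Raskar’s capture–analyze–engage loop – with record fields and thresholds that are my own assumptions, not anything from the class – consider how trivially a digital trip record enables analysis that a cash-and-meter taxi never could:

```python
from dataclasses import dataclass
from statistics import mean

# Step 1: capture – every trip becomes a structured record (fields are illustrative).
@dataclass
class Trip:
    rider_id: str
    minutes: float
    fare: float

trips = [Trip("r1", 12.0, 14.50), Trip("r1", 30.0, 31.00), Trip("r2", 8.0, 9.75)]

# Step 2: analyze – even simple aggregates are impossible with cash-and-meter taxis.
avg_fare = mean(t.fare for t in trips)
per_rider = {}
for t in trips:
    per_rider.setdefault(t.rider_id, []).append(t)

# Step 3: engage – act on the analysis, e.g. flag frequent riders for a promotion.
frequent = [rid for rid, ts in per_rider.items() if len(ts) >= 2]
print(f"avg fare: ${avg_fare:.2f}, frequent riders: {frequent}")
```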
Later, we also got some insights from Beth Porter, who spoke about educational technology and AI for neurodivergence.
“If you know anyone who has a child with autism,” she said, “or anyone who has children with ADHD, you also know that millions of hours are spent frustratingly trying to help students engage meaningfully in a variety of content experiences.”
Many of these activities, she said, are relatively ineffective because they are not presented in the right formats or are not well targeted to the needs of neurodivergent students.
“It doesn’t provide the right kind of feedback, it doesn’t feel like something they can connect to,” she said.
Porter encouraged students to think about the issue holistically and consider what types of learning can help people with these disabilities. That help doesn’t have to come through traditional modes like text and voice, she noted; some of it could come through images or video. AI for neurodivergence, she suggested, could also be linked to augmented reality and similar kinds of projects.
Hossein Rahnama spoke to us about what new career professionals can do to advance their goals and those of the community.
He suggested working on the core of the project, not just the interface.
Using the term “co-creation,” he explained that builders should imagine others using their ideas to come up with secondary applications.
He also talked about the value of everyday users to a technology – and contrasted that with how adoption works for B2B software or AI products.
Whichever path students choose, Rahnama encouraged them to embrace innovation. “Bring your passion,” he said, speaking about the value of improving the patient experience in healthcare, among other use cases.
After Rahnama, Sandy Pentland (who started the course over 20 years ago) came to talk about perspective-aware computing and other new advances.
“Don’t think small,” he said, encouraging students to “build something that affects a billion people.”
As for opportunities, he talked about breaking down silos in health care.
“You have to be able to tie (things) together,” he said. “There has to be AI on top of that.”
Citing the pandemic as a prime example, he noted that the response could have been much more robust with better data management.
“We didn’t share this data directly – we could have done a much better job,” he said.
He also spoke about microbiomes and RNA analysis.
Finally, we had an interesting contribution from Dave Blundin, who talked about some of the massive changes we’ll likely see in just a few years.
Blundin began by reflecting on his involvement with Lincoln Laboratory – which will be relevant to our conversation with Ivan Sutherland in another article – and how he came to MIT as a devoted fan of Marvin Minsky.
Blundin mentioned the problem of disparity, which he saw growing in Iran, and some of the paths to agile technology – he gave the example of Amazon, which started as a small startup and went on to supplant Walmart.
He also explained how to measure AI’s light-speed advancement.
“How much of your life did you spend last year talking to an AI?” he asked, suggesting students count things like Siri interactions and predicting that this metric will increase year over year.
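Blundin’s metric is easy to estimate for yourself. Here is a toy calculation with entirely made-up numbers, just to show the bookkeeping:

```python
# Hypothetical log of (assistant, seconds spent) per interaction – all numbers invented.
interactions = [
    ("Siri", 20), ("ChatGPT", 340), ("Siri", 15),
    ("Alexa", 45), ("ChatGPT", 600),
]

total_seconds = sum(seconds for _, seconds in interactions)
waking_seconds_per_year = 365 * 16 * 3600          # assume ~16 waking hours/day

share = total_seconds / waking_seconds_per_year
print(f"{total_seconds / 60:.1f} minutes with AI = "
      f"{share:.6%} of the year's waking hours")
```

The fraction is tiny today, which is exactly his point: watch how fast it grows.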
Speaking of one of his companies, which he has taken public, he said: “We get thousands and thousands of customer service phone calls every day. We’re recording them all, of course; those are the ones we’re testing with… those are going to move (to AI) very, very quickly.”
As for writing code, Blundin has some interesting thoughts on that as well.
“At OpenAI, 80% of the code is currently written by the machine,” he said, citing his recent conversation with Sam Altman and suggesting that there is a consensus this figure will reach 95% within a year or two!
This has all been extremely eye-opening. Keep your eyes peeled for more as we progress through 2024.