Much of what we say about artificial intelligence and machine learning concerns what you might call “technical considerations” – which makes sense, because these are revolutionary technologies.
But AI will also be social – it will operate in a social context. One way to see this is that with “humans in the loop” and assistive AI, systems must be able to interact with humans in particular ways.
So what about the social purpose of AI research?
Well, first of all, there is a huge amount of anxiety around the capabilities of these systems and what happens if they become too powerful, socially or otherwise, to control. If you want a frightening look at the future, check out this article on the fears of powerful AI – or tune in to hear Sam Altman or another big name in the industry discuss these prospects before a congressional audience.
On the more optimistic side, however, there are many questions about how to prepare for AI and build these first systems responsibly.
In a recent talk, Andrei Barbu offers some great ideas about what we can do to prepare when building AI models.
He begins by observing that the social element is largely neglected in AI as it is studied today.
The example he shows us is of a robot bringing objects to people – starting with a box, then moving to a dog, then a person.
Context, he says, is crucially important, and the example makes this clear: we might be fine with a robot carrying the first two items, but probably not the third.
Highlighting “quantitative and qualitative blind spots,” Barbu points to benchmarks that are too simple and do not really push programs to tackle harder distinctions. How much time do we spend on them? And what should we focus on instead?
As for qualitative deficits, Barbu invites us to build social simulators so we can see what these problems look like.
When you start building social reasoning models as nested Markov decision processes, you begin to see the behaviors that are likely to emerge – and those are worth observing.
What do these simulations look like? They are environments with agents, objects, and other components that show us what a program is likely to do in practice – and they are very valuable these days.
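To make the idea of nested decision processes concrete, here is a minimal toy sketch – not Barbu’s actual simulator, and all names (`best_action`, `level1_action`, the item positions) are hypothetical. One agent plans its own moves while nesting a model of another agent’s decision process, then uses that prediction to avoid a collision:

```python
# Toy "level-1" social reasoning sketch (hypothetical example, not Barbu's system).
# World: a one-dimensional corridor with two items; each agent wants to reach
# its preferred item. Positions and item locations are made up for illustration.

ITEMS = {"box": 0, "dog": 4}  # item name -> cell position on the line

def best_action(position, goal):
    """Level-0 policy: move one step toward the goal item (+1, -1, or stay)."""
    target = ITEMS[goal]
    if position < target:
        return +1
    if position > target:
        return -1
    return 0

def level1_action(my_pos, my_goal, other_pos, other_goal):
    """Level-1 policy: run the other agent's (level-0) decision process as a
    nested model, predict where it will be next, and avoid stepping there."""
    predicted_other = other_pos + best_action(other_pos, other_goal)
    move = best_action(my_pos, my_goal)
    if my_pos + move == predicted_other:
        return 0  # wait one step rather than collide with the predicted cell
    return move

if __name__ == "__main__":
    # Robot at cell 2 heading to the box (0); human at 3 heading to the dog (4).
    # Their predicted paths don't cross, so the robot moves left.
    print(level1_action(2, "box", 3, "dog"))  # -1
    # Robot at 2 heading to the dog (4); human at 4 heading to the box (0).
    # The nested model predicts the human will occupy cell 3, so the robot waits.
    print(level1_action(2, "dog", 4, "box"))  # 0
```

Even this toy version shows the point of the simulator: the interesting behaviors (yielding, waiting, path conflicts) only appear once one agent explicitly models the other’s decision process rather than treating it as a static obstacle.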
Turning to a zero-shot approach, Barbu describes how people can intuitively learn to play chess with only a basic knowledge of the game – a knowledge full of gaps. AI can too.
A little later in the video, he discusses how an AI program faces different types of input: aligned actions that help the program learn, or misaligned ones that hinder it.
In turn, he suggests, this triggers “robotic reasoning” and results.
What about use cases?
Well, Barbu gives an example involving photosensitivity: transformations that can help people with epilepsy avoid triggering content.
He also calls for more work on robots that teach humans skills or help them learn new things. Ultimately, he says, robots can learn from us, work with us, and adapt to our goals.
The extent to which this is possible remains to be seen, and much will likely depend on whether we approach AI assertively and proactively, and develop the rules of the road that will lead to successful “convergence” in this space. But the design principles above wouldn’t hurt.