One of the frustrations of ChatGPT (and other generative AI chatbots) is that they don’t remember what we told them in previous sessions. This means starting from scratch with each new chat, teaching it things about yourself, your business, or anything else you want it to know that it won’t find in its training data or through web browsing.
Well, it looks like all that is about to change. In what could prove to be its most significant update yet, OpenAI – the creators of ChatGPT – has just announced that it is giving ChatGPT a memory.
But while I think it will prove extremely useful to the millions of us who use it every day, it also raises some important concerns. How reliable will this memory be? What does this mean for data privacy? And are we ready for AI capable of developing long-term memories, bringing them even closer to human intelligence?
Why does ChatGPT need memory?
Just like the addition of web browsing to ChatGPT’s capabilities last year, adding memory is more than a regular incremental update. It could potentially alter its behavior and abilities in several ways.
So far, when ChatGPT generates a response, all the information it can consider comes from three sources: its training data, the user’s input during the current session and (if web browsing is enabled) the internet.
In effect, this update adds a fourth source: a long-term memory that persists between sessions and contains information that could make its responses more valuable in future conversations. This may include, for example, the user’s name, profession, or personal likes and dislikes.
Having this long-term memory as an additional source of information means that users won’t have to enter this data repeatedly every time they start a new session.
This should mean there will be less need for lengthy, detailed prompts that must be re-entered each session to ensure a consistent outcome.
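OpenAI has not published how ChatGPT’s memory works under the hood, but the general idea – persisting user facts between sessions and injecting them into each new conversation – can be sketched in a few lines. Everything below (the file name, the prompt format) is a hypothetical illustration, not OpenAI’s actual implementation:

```python
# Illustrative sketch only: persist user facts between sessions and
# prepend them to each new prompt, so the model "remembers" them.
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # hypothetical storage location


def save_memory(facts: dict) -> None:
    """Persist user facts so they survive between chat sessions."""
    MEMORY_FILE.write_text(json.dumps(facts))


def load_memory() -> dict:
    """Load previously stored facts, or an empty dict on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}


def build_prompt(user_message: str) -> str:
    """Prepend remembered facts so the model 'knows' them this session."""
    facts = "\n".join(f"- {k}: {v}" for k, v in load_memory().items())
    return f"Known facts about the user:\n{facts}\n\nUser: {user_message}"


# A new session no longer starts from scratch:
save_memory({"name": "Sam", "profession": "baker"})
print(build_prompt("Suggest a business idea."))
```

The point of the sketch is simply that the remembered facts arrive with the prompt automatically, instead of the user retyping them at the start of every session.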
The context window
The context window is the technical term for all the information that ChatGPT can “see” when it creates its responses.
If you’ve ever had a long conversation with it and noticed that it ends up forgetting things you’ve already said, it’s because you’ve run out of space in the context window. GPT-4 – ChatGPT’s most powerful model – has a context window of 8,192 tokens.
OpenAI has not yet clarified whether this new long-term memory will be loaded into the existing context window, or whether the context window will be enlarged, effectively making it “free” information.
Increasing the amount of information ChatGPT can process in a conversation means it will be able to perform longer, more complex, and more detailed tasks.
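Why does a limited context window cause forgetting? A rough way to picture it: once a conversation exceeds the token budget, the oldest turns no longer fit and drop out of view. The sketch below is an approximation – real models count subword tokens (e.g. via a tokenizer library), and here a simple word count stands in:

```python
# Rough illustration of why ChatGPT "forgets" in long chats: once the
# conversation exceeds the context window, the oldest turns are dropped.
CONTEXT_WINDOW = 8192  # GPT-4's limit, in tokens


def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real subword tokenizer.
    return len(text.split())


def fit_to_window(turns: list[str], limit: int = CONTEXT_WINDOW) -> list[str]:
    """Keep only the most recent turns that fit within the token budget."""
    kept, used = [], 0
    for turn in reversed(turns):        # walk from newest to oldest
        cost = estimate_tokens(turn)
        if used + cost > limit:
            break                       # everything older is "forgotten"
        kept.append(turn)
        used += cost
    return list(reversed(kept))


# One early message followed by lots of later chatter:
conversation = ["My name is Alex."] + ["Some later chatter. " * 300] * 10
visible = fit_to_window(conversation)
# The earliest message no longer fits, so the model can't "see" it.
```

A larger window, or memory that lives outside the window, would keep that early detail available – which is exactly the design question OpenAI has left open.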
How will this improve generative AI?
There are many ways in which giving generative AI long-term memory could improve it as a tool:
· It could improve its ability to learn over time, as it stores information from past interactions and uses it to inform future conversations.
· It can make its responses more personal, as it learns more about the user and understands the details of how they like to work and solve problems.
· Conversations will have better continuity, as it retains facts and information from previous discussions without the user needing to repeat them.
· It comes closer to providing responses that demonstrate emotional intelligence, as it can develop a long-term understanding of users’ emotional responses.
· It can become better at making decisions, because it remembers how previous actions and interactions affected outcomes.
All of this could also help us develop a deeper sense of connection with AIs, allowing us to make better decisions about trust.
But what should we worry about?
Despite all the benefits, there are also a number of important concerns that need to be considered.
Probably the most pressing concerns are data privacy and security. By its nature, much of this newly stored data will be personal information – things that are specific to us as users and human beings. OpenAI’s announcement states that the tool will avoid proactively remembering sensitive data, such as health information, unless users specifically ask it to.
It also says it will give users granular control over what information is retained, and over whether that information can be used to train its systems. This sounds good, but we have yet to see how it will work in practice. We can only hope that other AI tools now rushing to add memory will be as conscientious.
There are also ethical concerns about deciding what an AI should remember or forget. Even though the user has control, the tool itself may still need to make decisions regarding other people’s personal information.
Another, perhaps longer-term, concern is how this will affect the way we use technology in general. Does this bring us closer to AGI? Memory is an essential characteristic of natural human intelligence, and while ChatGPT has always had some sort of memory – its training data could be considered such – this takes it a step further by allowing it to remember things about people as individuals. Coming to terms with all the ethical and cultural implications of this situation will be a complex task.
However these problems are eventually solved, it seems certain that we will remember this as an important development in the history of AI – and now, it seems, AI will remember it too.