Do Language Models Dream of AI Memory?
Human brains are extraordinary: during sleep they consolidate memories, sorting and storing the significant while letting go of the meaningless. If artificial intelligence could mimic this, imagine the possibilities.
Bilt, a company facilitating local shopping, aims to turn this into a reality. With the help of Letta, a pioneering startup, Bilt introduced millions of AI agents capable of learning from past interactions. Using a method known as "sleeptime compute," these agents discern which data to retain for long-term memory and what to make readily accessible.
Andrew Fitz, an AI engineer at Bilt, notes, "We can update one memory block and alter the behavior of innumerable agents," explaining it as a way to finely tune agent contexts at inference time. Traditionally, language models "recall" only if information is in the context window, meaning users must re-enter prior chats for continuity.
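The idea Fitz describes can be illustrated with a toy sketch. The class names and structure below are hypothetical, not Letta's actual SDK: the point is simply that when many agents hold a reference to one shared memory block, editing that block once changes the context every agent assembles at inference time.

```python
# Hypothetical sketch of shared memory blocks (not Letta's real API).
class MemoryBlock:
    def __init__(self, label: str, value: str):
        self.label = label
        self.value = value

class Agent:
    def __init__(self, name: str, shared_block: MemoryBlock):
        self.name = name
        self.block = shared_block  # shared reference, not a copy

    def build_context(self, user_message: str) -> str:
        # The block's current value is injected into the prompt each time.
        return f"[{self.block.label}]\n{self.block.value}\n\nUser: {user_message}"

policy = MemoryBlock("merchant_policy", "Recommend partner merchants first.")
agents = [Agent(f"agent-{i}", policy) for i in range(3)]

# One edit to the block alters the behavior of every agent that shares it.
policy.value = "Recommend the closest merchant, partner or not."
print(agents[0].build_context("Where should I get coffee?"))
```

Because the agents hold a reference rather than a snapshot, no per-agent update step is needed; the change is visible the next time any agent builds a prompt.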
This limitation makes AI memory fragile: overload the context window and a model may hallucinate or falter. The human brain, by contrast, continually stores and recalls what matters, absorbing new information like a sponge. Charles Packer, CEO of Letta, states, "With language models, it's the opposite. Continuous loops lead to corrupted context," emphasizing the need for resets.
Packer, alongside Sarah Wooders, previously introduced MemGPT—aimed at differentiating short-term and long-term memory for language models. Their work with Letta extends this, letting AI learn autonomously. Bilt and Letta’s project is part of a broader ambition to enhance AI's memory capabilities, potentially increasing chatbots' intelligence and reliability.
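The short-term versus long-term split behind MemGPT can be sketched in a few lines. This is an illustrative toy, not the real MemGPT code: a small buffer stands in for the context window, and overflow is evicted to an archival store that can later be searched back into context.

```python
from collections import deque

# Toy two-tier memory in the spirit of MemGPT (illustrative only).
class TieredMemory:
    def __init__(self, context_limit: int = 4):
        self.short_term = deque()       # lives inside the context window
        self.long_term: list[str] = []  # archival storage outside the window
        self.context_limit = context_limit

    def remember(self, message: str) -> None:
        self.short_term.append(message)
        while len(self.short_term) > self.context_limit:
            # Evict the oldest messages to long-term storage instead of losing them.
            self.long_term.append(self.short_term.popleft())

    def recall(self, query: str) -> list[str]:
        # Naive keyword match stands in for semantic retrieval.
        return [m for m in self.long_term if query.lower() in m.lower()]

mem = TieredMemory(context_limit=2)
for msg in ["rent is due on the 1st", "likes espresso",
            "prefers email", "lives downtown"]:
    mem.remember(msg)

print(mem.recall("espresso"))  # retrieved from long-term memory
```

The design choice here mirrors the article's framing: instead of silently truncating old context, the system demotes it to cheaper storage it can consult on demand.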
Harrison Chase of LangChain, another firm working on AI memory, frames this as the core of context engineering: deciding what to place in the context window. LangChain offers companies a range of memory storage options, making memory integral to supplying the right context to AI models.
Memory's role in AI extends to consumer applications as well. In February, OpenAI announced that ChatGPT would remember details across conversations for a more personalized experience, though specifics remain undisclosed.
Transparency about how AI memory is handled, as practiced by companies like Letta and LangChain, is crucial for developers, according to Clem Delangue of Hugging Face. Letta's Packer adds that AI should also forget on demand, for example when a user asks to delete the memories tied to a specific project.
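Forgetting on demand is simple to state in code. The function below is a hedged illustration of the idea Packer describes, not any product's actual deletion API: it drops every stored memory that mentions a user-named topic.

```python
# Illustrative "forget on demand" (hypothetical, not a real product API).
def forget(memories: list[str], topic: str) -> list[str]:
    # Keep only memories that do not mention the requested topic.
    return [m for m in memories if topic.lower() not in m.lower()]

memories = ["Project Atlas deadline is Friday", "likes espresso"]
print(forget(memories, "project atlas"))  # ["likes espresso"]
```

A real system would also have to purge the topic from any archival or vector stores, not just the in-context list, which is part of why transparent memory handling matters to developers.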
This evolving notion of AI memory invites a reflection on _Do Androids Dream of Electric Sheep?_ by Philip K. Dick. While AI entities have yet to match the flair of fictional replicants, their memory fragility remains similarly pronounced.