Google has announced the development of a new machine learning model designed to continually learn and improve itself.
The model, called ‘HOPE’, features a self-modifying architecture and reportedly manages long-context memory more efficiently than current state-of-the-art AI systems, according to a Google blog post published on Saturday, November 8.
Google said the project serves as a proof of concept for a new framework known as ‘nested learning’, which treats a single model as “a system of interconnected, multi-level learning problems optimized simultaneously,” rather than as a single linear process.
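The “multi-level” idea can be pictured as optimization loops running at different speeds. The Python sketch below is a loose illustration of that general concept only, not Google’s implementation: the toy task, the meta-update rule, and every variable name are invented for this example. An inner learner updates its model weight at every step, while an outer loop adjusts that learner’s learning rate on a slower schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: recover w* = 3.0 from noisy data.
X = rng.normal(size=256)
y = 3.0 * X + 0.1 * rng.normal(size=256)

w = 0.0    # inner-level parameter (the model weight)
lr = 0.01  # outer-level parameter (a learnable learning rate)

def loss(w):
    return np.mean((X * w - y) ** 2)

for step in range(1, 501):
    grad_w = np.mean(2 * (X * w - y) * X)

    if step % 50 == 0:
        # Outer level (slow): every 50 steps, keep whichever candidate
        # learning rate would lower the loss more on the next step.
        lr = min([lr * 1.5, lr * 0.75], key=lambda a: loss(w - a * grad_w))

    # Inner level (fast): an ordinary gradient step with the current lr.
    w -= lr * grad_w

print(f"w = {w:.3f}, lr = {lr:.4f}")  # w converges toward 3.0
```

The two loops form distinct learning problems, one nested inside the other and updated at different frequencies, which is the sense in which the paradigm is “multi-level”.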
Researchers believe the approach could help overcome key limitations in today’s large language models, particularly their inability to learn continually — a major hurdle on the path to artificial general intelligence.
The announcement comes amid growing debate in the AI community over how close the field is to achieving AGI.
“We believe the Nested Learning paradigm offers a robust foundation for closing the gap between the limited, forgetting nature of current LLMs and the remarkable continual learning abilities of the human brain,” Google said.
Unlike the human brain, which continuously learns and adapts, today’s LLMs struggle to acquire new information without losing previously learned knowledge, a limitation known as “catastrophic forgetting”.
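Catastrophic forgetting is easy to reproduce at a small scale. The sketch below, written in PyTorch with two synthetic tasks invented purely for illustration (it has nothing to do with HOPE itself), trains a tiny classifier on one task, then on a second, and shows accuracy on the first task falling back toward chance level.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two synthetic binary-classification tasks on 2-D points.
# Task A: label = 1 if x0 > 0.  Task B: label = 1 if x1 > 0.
def make_task(axis, n=2000):
    x = torch.randn(n, 2)
    y = (x[:, axis] > 0).long()
    return x, y

xa, ya = make_task(axis=0)
xb, yb = make_task(axis=1)

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, epochs=200):
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

train(xa, ya)
print(f"Task A accuracy after training on A: {accuracy(xa, ya):.2f}")  # near 1.00

train(xb, yb)  # sequential training on B, with no rehearsal of A
print(f"Task A accuracy after training on B: {accuracy(xa, ya):.2f}")  # collapses toward ~0.50
```

Because nothing anchors the weights that solved task A, fitting task B simply overwrites them; continual-learning research, including the nested learning work, is aimed at avoiding exactly this failure mode.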

