OpenAI has warned that although superintelligent systems promise significant benefits, they also pose “potentially catastrophic” risks.
To reduce these dangers, the ChatGPT maker proposed extensive empirical research on AI safety and alignment, including exploring whether the entire industry should “slow development to study these systems more carefully.”
The company further cautioned that the field is edging closer to creating systems capable of recursive self-improvement.
“Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work,” OpenAI said in a blog post.
OpenAI’s comments suggest that continual learning in AI systems may be approaching reality. The capability, widely seen as a prerequisite for recursive self-improvement, has long been viewed as one of the main missing pieces on the path to artificial general intelligence (AGI), the point at which AI systems can outperform humans at most tasks.
However, AI researcher Andrej Karpathy has noted that AGI could still be about a decade away, as many unresolved challenges remain.
“They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues,” Karpathy said in a recent appearance on a podcast.