OpenAI, the creator of ChatGPT, plans to commit significant resources and establish a new research team dedicated to ensuring that its artificial intelligence remains safe for people to use.
“The vast power of superintelligence could … lead to the disempowerment of humanity or even human extinction,” OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a blog post. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” they added.
The blog post projected that superintelligent AI, meaning systems smarter than humans, could arrive this decade.
As a result, the authors argue, breakthroughs in so-called “alignment research,” which aims to ensure AI remains beneficial to humans, will be needed if humans are to control superintelligent AI.
Microsoft-backed OpenAI will dedicate 20% of the computing power it has secured over the next four years to solving this problem, they said.
In addition, the company is forming a new team, dubbed the Superalignment team, to organise around this effort.