The artificial intelligence lab OpenAI said it is forming a new team to develop democratic processes for deciding how its AI software should be governed, in an effort to address bias and other concerns.
In a blog post on Tuesday, the company said the “Collective Alignment” team will carry forward the work of a grant program launched in May 2023 to fund experiments with democratic processes.
“As we continue to pursue our mission towards superintelligent models who potentially could be seen as integral parts of our society … it’s important to give people the opportunity to provide input directly,” Tyna Eloundou, a research engineer and founding member of OpenAI’s new team, told Reuters.
OpenAI’s ChatGPT, launched in late 2022, has become one of the fastest-growing applications in history, buoyed by public fascination with generative AI technology that can produce remarkably authoritative-sounding prose from text prompts.
But the technology has also raised concerns that AI can be used to generate “deepfake” images and other misinformation, particularly as the 2024 US presidential campaign heats up.
Critics argue that the data used to train AI systems such as ChatGPT embeds inherent biases, and users have found examples of racist or sexist output from AI software.