OpenAI assembles expert network to assist in AI risk assessment

Alex Omenye

OpenAI on Tuesday launched a Red Teaming Network, a group of contracted specialists who will help inform the company’s risk assessment and mitigation strategies for its AI models.

As AI technologies, particularly generative technologies, become more widely used, red teaming is becoming an increasingly important step in the development of AI models. Red teaming can uncover prompts that cause text-generating models like ChatGPT and GPT-4 to bypass safety filters, as well as biases in image-generating models like OpenAI’s DALL-E 2, which has been shown to amplify racial and sexual stereotypes.

According to OpenAI, it has already tested and benchmarked its models with outside specialists, including members of its bug bounty programme and researcher access programme.

The Red Teaming Network formalises those initiatives, with the aim of “deepening” and “broadening” OpenAI’s collaboration with researchers, academic institutions, and non-profit groups, the company said in a blog post.

OpenAI is inviting participation from a wide spectrum of subject-matter specialists, including those with expertise in linguistics, biometrics, finance, and healthcare.

No prior experience with AI systems or language models is necessary to qualify. However, the company warns that opportunities offered through the Red Teaming Network may be subject to non-disclosure and confidentiality agreements that could affect participants’ other research.

