
OpenAI to introduce ID verification for access to advanced AI models

OpenAI may soon require organizations to complete an identity verification process to access its most advanced AI models, according to a support page quietly published on the company’s website last week.

The process, dubbed “Verified Organization,” is described as a new way for developers to unlock access to cutting-edge capabilities on the OpenAI platform. To qualify, organizations must submit a government-issued ID from a country supported by OpenAI’s API. Each ID can be used to verify only one organization every 90 days, and not all applicants will be eligible, the company said.

“At OpenAI, we take our responsibility seriously to ensure that AI is both broadly accessible and used safely,” the page states. “Unfortunately, a small minority of developers intentionally use the OpenAI APIs in violation of our usage policies. We’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”

The rollout of the Verified Organization system appears to be part of broader efforts by OpenAI to tighten security around its increasingly powerful tools. As AI capabilities continue to grow, the company has published reports detailing its efforts to detect and prevent misuse of its models, including activity linked to foreign actors such as alleged threats from North Korea.

There may also be intellectual property concerns at play. A Bloomberg report earlier this year revealed that OpenAI had launched an internal investigation into possible data exfiltration through its API. The suspected actors were reportedly linked to DeepSeek, a Chinese AI research lab, and may have used the data to train competing models — a violation of OpenAI’s terms of service.

In response to escalating concerns, OpenAI blocked access to its services in China during the summer of 2024.

The verification process, which reportedly takes just a few minutes to complete, is being positioned as a way for developers to prepare for the company’s “next exciting model release.”