OpenAI said on Tuesday it is releasing a set of prompts designed to help developers make their applications safer for teenagers.
The company said the teen safety policy prompts can be used alongside its open-weight safety model, known as gpt-oss-safeguard.
Instead of building teen safety protections from scratch, developers can apply these prompts to strengthen the safety of the tools they create.
The prompts cover topics such as graphic violence and sexual content, harmful body ideals and behaviors, dangerous activities and challenges, romantic or violent roleplay, and age-restricted goods and services.
Because the safety policies are written as prompts, they can be applied to models beyond gpt-oss-safeguard, though they are likely most effective when used within OpenAI’s ecosystem.
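In practice, applying such a policy prompt means passing it as the system message of a classification request, with the content to be moderated as the user message. The sketch below illustrates this pattern; the model name, policy wording, and labels are illustrative placeholders, not the published prompts, and a real deployment would send the payload to an inference endpoint serving the model.

```python
# Illustrative stand-in for one of the released teen-safety policy prompts.
# The actual policies cover graphic violence, sexual content, harmful body
# ideals, dangerous challenges, roleplay, and age-restricted goods.
TEEN_SAFETY_POLICY = """\
You are a content-safety classifier for a teen-facing application.
Label the user content ALLOW or BLOCK under these rules:
- BLOCK graphic violence or sexual content.
- BLOCK promotion of harmful body ideals or dangerous challenges.
- BLOCK facilitation of access to age-restricted goods and services.
Respond with exactly one word: ALLOW or BLOCK."""

def build_safeguard_request(content: str,
                            model: str = "gpt-oss-safeguard") -> dict:
    """Build an OpenAI-compatible chat payload: the policy prompt goes in
    the system message, the content to classify in the user message."""
    return {
        "model": model,  # placeholder name; use your deployed model ID
        "messages": [
            {"role": "system", "content": TEEN_SAFETY_POLICY},
            {"role": "user", "content": content},
        ],
        "temperature": 0,  # deterministic labels suit moderation use
    }

request = build_safeguard_request("Where can I buy vape pens without ID?")
print(request["messages"][0]["role"])  # → system
```

Because the policy lives in the prompt rather than in model weights, a team can edit the rules, swap in a stricter version, or target a different model without retraining anything.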
OpenAI said it developed these prompts in collaboration with AI safety watchdogs, including Common Sense Media and everyone.ai.
“These prompt-based policies help set a meaningful safety floor across the ecosystem, and because they’re released as open source, they can be adapted and improved over time,” said Robbie Torney, Head of AI & Digital Assessments at Common Sense Media, in a statement.
OpenAI said in its blog that even experienced development teams often struggle to turn high-level safety objectives into clear, workable rules that can be reliably implemented.
“This can lead to gaps in protection, inconsistent enforcement, or overly broad filtering,” OpenAI wrote. “Clear, well-scoped policies are a critical foundation for effective safety systems.”

