OpenAI, along with industry leaders including Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI, has pledged to incorporate robust child safety measures in the development, deployment, and maintenance of generative AI technologies, in accordance with the Safety by Design principles. This effort, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization focused on addressing the complex problems between technology and society, aims to reduce the risks that generative AI poses to children.

By embracing comprehensive Safety by Design principles, OpenAI and its counterparts are ensuring that child safety remains a top priority at every stage of AI development. We have already taken significant steps to minimize the potential for our models to generate content that harms children, set age restrictions for ChatGPT, and actively collaborate with the National Center for Missing and Exploited Children (NCMEC), the Tech Coalition, and other governmental and industry stakeholders on child protection matters and improvements to reporting mechanisms.
As part of this Safety by Design initiative, we are committed to:
Develop: Develop, construct, and train generative AI models that proactively address child safety risks.
- Responsibly source training datasets, identify and eliminate child sexual abuse material (CSAM) and child sexual exploitation material (CSEM) from training data, and report any confirmed CSAM to the appropriate authorities.
- Incorporate feedback loops and iterative stress-testing strategies in our development process.
- Implement solutions to tackle adversarial misuse.
Deploy: Release and distribute generative AI models only after they have been trained and evaluated for child safety, ensuring protections are in place throughout the process.
- Combat and respond to abusive content and behavior, as well as integrate prevention efforts.
- Promote developer accountability in safety by design.
Maintain: Uphold model and platform safety by proactively identifying and responding to child safety risks.
- Remove any new AI-generated child sexual abuse material (AIG-CSAM) produced by malicious actors from our platform.
- Invest in research and future technological solutions.
- Combat CSAM, AIG-CSAM, and CSEM on our platforms.
This commitment represents a significant step toward preventing the misuse of AI technologies to create or disseminate AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. As part of the working group, we have also agreed to release annual progress updates.