OpenAI, alongside industry leaders including Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI, has committed to implementing robust child safety measures in the development, deployment, and maintenance of generative AI technologies, as articulated in the Safety by Design principles. This initiative, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to tackling complex problems at the intersection of technology and society, aims to mitigate the risks generative AI poses to children. By adopting comprehensive Safety by Design principles, OpenAI and our peers are ensuring that child safety is prioritized at every stage of AI development. To date, we have made significant effort to minimize the potential for our models to generate content that harms children, set age restrictions for ChatGPT, and actively engaged with the National Center for Missing and Exploited Children (NCMEC), the Tech Coalition, and other government and industry stakeholders on child protection issues and enhancements to reporting mechanisms.
As part of this Safety by Design effort, we commit to:
- Develop: Develop, build, and train generative AI models that proactively address child safety risks.
    - Responsibly source our training datasets, and detect and remove child sexual abuse material (CSAM) and child sexual exploitation material (CSEM) from training data, reporting any confirmed CSAM to the relevant authorities.
    - Incorporate feedback loops and iterative stress-testing strategies into our development process.
    - Deploy solutions to address adversarial misuse.
- Deploy: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.
    - Combat and respond to abusive content and conduct, and incorporate prevention efforts.
    - Encourage developer ownership in safety by design.
- Maintain: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.
    - Commit to removing new AIG-CSAM generated by bad actors from our platform.
    - Invest in research and future technology solutions.
    - Fight CSAM, AIG-CSAM, and CSEM on our platforms.
This commitment marks an important step in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. As part of the working group, we have also agreed to publish progress updates every year.