Today we are implementing a new technique so that DALL·E generates images of people that more accurately reflect the diversity of the world’s population. This technique is applied at the system level when DALL·E is given a prompt describing a person that does not specify race or gender, such as “firefighter”.
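The announcement does not describe the mitigation’s internals. As an illustration only, one common way to implement this kind of system-level intervention is to detect prompts that mention a person without a gender or race/ethnicity descriptor and append a randomly sampled descriptor before the prompt reaches the image model. The sketch below is a minimal, hypothetical version of that idea; the keyword lists and function names are assumptions, not OpenAI’s actual implementation.

```python
import random

# Illustrative sketch of a prompt-level diversity mitigation.
# The term lists and helper names are assumptions for illustration,
# not OpenAI's actual implementation.

PERSON_TERMS = {"person", "people", "firefighter", "teacher", "doctor", "ceo", "nurse"}
DEMOGRAPHIC_TERMS = {"man", "woman", "male", "female", "black", "white",
                     "asian", "hispanic", "latina", "latino"}
DESCRIPTORS = ["woman", "man", "Black person", "East Asian person",
               "South Asian person", "Hispanic person", "white person"]

def mentions_unspecified_person(prompt: str) -> bool:
    """True if the prompt refers to a person but specifies neither
    gender nor race/ethnicity (simple keyword heuristic for illustration)."""
    words = set(prompt.lower().replace(",", " ").split())
    return bool(words & PERSON_TERMS) and not (words & DEMOGRAPHIC_TERMS)

def apply_diversity_mitigation(prompt: str) -> str:
    """Applied at the system level, before the prompt reaches the image
    model: append a randomly sampled descriptor to unspecified prompts."""
    if mentions_unspecified_person(prompt):
        return f"{prompt}, {random.choice(DESCRIPTORS)}"
    return prompt

print(apply_diversity_mitigation("a portrait of a firefighter"))
# e.g. "a portrait of a firefighter, East Asian person"
```

In a production system the keyword heuristic would be replaced by a classifier, and the sampling distribution would be tuned against evaluation data rather than drawn uniformly.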
Based on our internal evaluation, users were 12 times more likely to say that DALL·E images included people of diverse backgrounds after the technique was applied. We plan to improve this technique over time as we gather more data and feedback.
Background
In April, we began previewing DALL·E 2 research with a limited number of people, which has allowed us to better understand the system’s capabilities and limitations and to improve our safety systems.
During this preview phase, early users have flagged sensitive and biased images, which have helped inform and evaluate this new mitigation.
We are continuing to research how AI systems like DALL·E might reflect biases in their training data, and the different ways we can address them.
During the research preview, we have taken other steps to improve our safety systems, including the following (an illustrative sketch of how such checks might be combined appears after this list):
- Minimizing the risk of DALL·E being misused to create deceptive content by rejecting image uploads containing realistic faces and attempts to create the likeness of public figures, including celebrities and prominent political figures.
- Making our content filters more accurate so that they are more effective at blocking prompts and image uploads that violate our content policy, while still allowing creative expression.
- Refining automated and human monitoring systems to guard against misuse.
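As a rough illustration of how checks like these could sit in front of the generation endpoint, the sketch below wires three stand-in predicates into a single pre-generation gate. The function names, keyword lists, and logic are hypothetical; the real systems rely on trained classifiers and human review rather than keyword matching.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical pre-generation safety gate combining the checks listed above.
# The three predicate functions are toy stand-ins, not OpenAI's systems.

BLOCKED_TERMS = {"gore", "violence"}      # assumed placeholder policy terms
PUBLIC_FIGURES = {"example politician"}   # assumed placeholder names

@dataclass
class ModerationResult:
    allowed: bool
    reason: Optional[str] = None

def violates_content_policy(prompt: str) -> bool:
    """Stand-in for a content-policy classifier over the text prompt."""
    return any(term in prompt.lower() for term in BLOCKED_TERMS)

def matches_public_figure(prompt: str) -> bool:
    """Stand-in for a check that flags attempts to depict real public figures."""
    return any(name in prompt.lower() for name in PUBLIC_FIGURES)

def contains_realistic_face(image_bytes: bytes) -> bool:
    """Stand-in for a detector tuned for photorealistic faces in uploads."""
    return False  # a real system would run a face-detection model here

def moderate_request(prompt: str, uploaded_image: Optional[bytes] = None) -> ModerationResult:
    """Reject the request before generation if any check fires."""
    if violates_content_policy(prompt):
        return ModerationResult(False, "prompt violates the content policy")
    if matches_public_figure(prompt):
        return ModerationResult(False, "prompt attempts to depict a public figure")
    if uploaded_image is not None and contains_realistic_face(uploaded_image):
        return ModerationResult(False, "uploaded image contains a realistic face")
    return ModerationResult(True)

print(moderate_request("a watercolor painting of a lighthouse"))
# ModerationResult(allowed=True, reason=None)
```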
These improvements have helped us gain confidence in the ability to invite more users to experience DALL·E.
Expanding access is an important part of deploying AI systems responsibly because it allows us to learn more about real-world use and continue to iterate on our safety systems.