As always, you are in control of your data with ChatGPT. Your chats with GPTs are not shared with builders. If a GPT uses third-party APIs, you choose whether data can be sent to that API. When builders customize their own GPT with actions or knowledge, the builder can choose whether user chats with that GPT can be used to improve and train our models. These choices build on the existing privacy controls users already have, including the option to exclude their entire account from model training.
We have set up new systems to help review GPTs against our usage policies. These systems add to our existing mitigations and aim to prevent users from sharing harmful GPTs, including those that involve fraudulent activity, hateful content, or adult themes. We’ve also taken steps to build user trust by allowing builders to verify their identity. We will continue to monitor and learn how people use GPTs, and will update and strengthen our safety mitigations accordingly. If you have concerns about a specific GPT, you can also use our reporting feature on the GPT shared page to notify our team.
GPTs will continue to become more useful and intelligent, and over time you will be able to let them take on real tasks in the real world. In the field of AI, these systems are often called “agents.” We believe it is important to move gradually towards this future, as it will require careful technical and safety work, and time for society to adapt. We’ve been thinking deeply about the societal implications and will have more analysis to share soon.