We recognize that generating speech that resembles people's voices carries serious risks, which are especially important in an election year. We're collaborating with U.S. and international partners across government, media, entertainment, education, civil society, and more to ensure we incorporate their feedback as we build.
Partners testing Voice Engine today have agreed to our usage policies, which prohibit the impersonation of another individual or organization without consent or legal right. Additionally, our terms with these partners require the explicit, informed consent of the original speaker, and we do not allow developers to build ways for individual users to create their own voices. Partners must also clearly disclose to their audience that the voices they hear are AI-generated. Finally, we have implemented a set of safety measures, including watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it is used.
We believe that any broad deployment of synthetic voice technology should be accompanied by voice authentication experiences that verify the original speaker is knowingly adding their voice to the service, and a banned voice list that detects and prevents the creation of voices that are too similar to prominent figures.
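To make the banned-voice idea concrete, here is a minimal sketch of how such a check might work, assuming each voice is summarized by a fixed-length speaker embedding produced by some upstream model. The function names, the embedding source, and the similarity threshold below are illustrative assumptions, not details of Voice Engine itself.

```python
import numpy as np

# Illustrative cutoff only; a real system would tune this against its own
# embedding model and false-positive/false-negative targets.
SIMILARITY_THRESHOLD = 0.85


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_banned_voice(candidate: np.ndarray, banned_embeddings: list[np.ndarray]) -> bool:
    """Return True if the candidate voice is too close to any voice on the banned list."""
    return any(
        cosine_similarity(candidate, banned) >= SIMILARITY_THRESHOLD
        for banned in banned_embeddings
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in embeddings for voices on the banned list (e.g. prominent figures).
    banned = [rng.normal(size=256) for _ in range(3)]
    # A candidate voice that is nearly identical to one banned voice.
    candidate = banned[0] + rng.normal(scale=0.05, size=256)
    print(is_banned_voice(candidate, banned))  # True: creation would be blocked
```

In practice such a check would run before a new voice is made available, alongside the consent verification described above, so that near-matches to listed voices are rejected rather than merely flagged after the fact.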