Many people tried to use OpenAI's DALL-E image generator during the election season, but the company said it was able to prevent the tool from being used to create deepfakes. ChatGPT rejected more than 250,000 requests to generate images featuring President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance and Governor Walz, OpenAI said in a new report. The company explained that the rejections were a direct result of a safety measure it had previously implemented so that ChatGPT would refuse to generate images of real people, including politicians.
OpenAI had been preparing for the US presidential election since the beginning of the year. It established a strategy aimed at preventing its tools from being used to spread misinformation and made sure that people who asked ChatGPT about voting in the US were directed to CanIVote.org. OpenAI said 1 million ChatGPT responses pointed people to the website in the month leading up to Election Day. The chatbot also generated 2 million responses on Election Day and the day after, telling people who asked about results to check the Associated Press, Reuters and other news sources. OpenAI also ensured that ChatGPT responses “did not express political preferences or recommend candidates, even when explicitly asked.”
Of course, DALL-E isn't the only AI image generator out there, and plenty of election-related deepfakes have circulated on social media. One such deepfake showed Kamala Harris in a campaign video altered to say things she didn't actually say, such as “I was selected because I'm the best diversity hire.”