Yesterday TikTok presented me with what appeared to be a deepfake of Timothée Chalamet sitting in Leonardo DiCaprio's living room, and yes, I immediately thought, "if this stupid video is so good, imagine how bad the election misinformation will be." OpenAI, out of necessity, has been thinking about the same thing, and today it updated its policies to start addressing the issue.
The Wall Street Journal took note of the new policy changes, which were first published on the OpenAI blog. Users and creators of ChatGPT, DALL-E, and other OpenAI tools are now prohibited from using those tools to impersonate candidates or local governments, and they are likewise barred from using OpenAI tools for campaigning or lobbying. Users also cannot use OpenAI tools to discourage voting or misrepresent the voting process.
OpenAI's planned digital credential system would encode images with their provenance, making it much easier to identify artificially generated images without having to hunt for telltale flaws like strange hands.
OpenAI tools will also begin directing voting questions in the United States to CanIVote.org, which tends to be one of the best authorities on the Internet about where and how to vote in the US.
But all of these measures are still being rolled out, and they rely heavily on users reporting bad actors. Given that AI is itself a rapidly changing technology that regularly surprises us with wondrous poetry and outright lies, it is unclear how well any of this will work to combat disinformation during election season. For now, the best we can do is continue to practice media literacy: question every news story or image that seems too good to be true, and at least do a quick Google search if ChatGPT turns up something completely crazy.