Hundreds of members of the artificial intelligence community have signed an open letter calling for strict regulation of AI-generated impersonations, or deepfakes. While the letter is unlikely to spur actual legislation (the new House task force notwithstanding), it does serve as an indicator of where experts stand on this controversial issue.
The letter, signed by more than 500 people in and around the AI field at the time of publication, states that “deepfakes are a growing threat to society, and governments must impose obligations across the supply chain to stop the proliferation of deepfakes.”
The signatories call for the full criminalization of deepfake child sexual abuse materials (CSAM, a.k.a. child pornography) regardless of whether the figures depicted are real or fictional. They also want criminal penalties in any case where someone creates or spreads harmful deepfakes, and they ask developers to prevent harmful deepfakes from being made with their products in the first place, with penalties if their preventive measures prove inadequate.
Among the most prominent signatories of the letter are:
- Jaron Lanier
- Frances Haugen
- Stuart Russell
- Andrew Yang
- Marietje Schaake
- Steven Pinker
- Gary Marcus
- Oren Etzioni
- Genevieve Smith
- Yoshua Bengio
- Dan Hendrycks
- Tim Wu
Hundreds of academics from around the world and across many disciplines also signed. In case you're curious, one person from OpenAI signed, as did a couple from Google DeepMind; at the time of this post, no one from Anthropic, Amazon, Apple, or Microsoft had (apart from Lanier, whose position at Microsoft is atypical). Curiously, the letter orders its signatories by “notability.”
This is far from the first call for such measures; in fact, they were debated in the EU for years before being formally proposed earlier this month. Perhaps it is the EU's willingness to deliberate and follow through that prompted these researchers, creators, and executives to speak out.
Or maybe it's the Kids Online Safety Act's (KOSA) slow progress toward passage, and its lack of protections against this type of abuse.
Or maybe it's the threat (as we've already seen) of AI-generated scam calls that could sway elections or swindle unsuspecting people out of their money.
Or maybe it's that yesterday's task force was announced with no particular agenda beyond writing a report on what some AI-based threats might be and how they might be restricted legislatively.
As you can see, there's no shortage of reasons for those in the AI community to be waving their arms around and saying “maybe we should, you know, do something?!”
Whether anyone will take notice of this letter is anyone's guess; no one really paid attention to the infamous one calling for everyone to “pause” AI development, though this letter is a bit more practical. If lawmakers decide to take up the issue (an unlikely event in an election year with a closely divided Congress), they will have this list to draw on when taking the temperature of the global AI academic and development community.