A group of well-known AI ethicists have written a counterpoint to this week’s controversial letter calling for a six-month “pause” on AI development, criticizing it for a focus on hypothetical future threats when real harms are attributable to misuse of the technology today.
Thousands of people, including such household names as Steve Wozniak and Elon Musk, signed the Future of Life Institute’s open letter earlier this week, proposing that development of AI models like GPT-4 be put on hold in order to avoid “loss of control of our civilization,” among other threats.
Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell are all major figures in the fields of AI and ethics, known (in addition to their work) for having been pushed out of Google over a paper criticizing the capabilities of AI. They are currently working together at the DAIR Institute, a research outfit aimed at studying, exposing, and preventing the harms associated with AI.
But their names were absent from the list of signatories, and now they have published a rebuke decrying the letter’s failure to engage with the existing problems caused by the technology.
“Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,” they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures, and the further concentration of those power structures in fewer hands.
Choosing to worry about a Terminator- or Matrix-esque robot apocalypse is a red herring when we have, in the same moment, reports of companies like Clearview AI being used by police to essentially frame an innocent man. No need for a T-1000 when you have Ring cameras on every front door, accessible through online rubber-stamp warrant factories.
While the DAIR team agrees with some of the goals in the letter, such as identifying synthetic media, they emphasize that action must be taken now, on today’s problems, with the remedies available to us:
What we need is regulation that enforces transparency. Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures. The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that the builders of these systems should be made accountable for the outputs produced by their products.
Today’s race to ever bigger “AI experiments” is not a preordained path where our only choice is how fast to run, but rather a set of profit-driven decisions. The actions and choices of corporations must be shaped by regulation that protects the rights and interests of people.
Indeed, it is time to act: but the focus of our concern should not be imaginary “powerful digital minds.” Instead, we should focus on the very real and very present exploitative practices of the companies that claim to build them, which are rapidly centralizing power and increasing social inequities.
By the way, this letter echoes a sentiment I heard from Uncharted Power founder Jessica Matthews at yesterday’s AfroTech event in Seattle: “You shouldn’t be afraid of AI. You should be afraid of the people who build it.” (Her solution: become one of the people who build it.)
While it is extremely unlikely that any major company would agree to pause its research efforts in accordance with the open letter, it is clear from the engagement the letter received that the risks of AI, real and hypothetical, are of great concern across many segments of society. But if those companies won’t act, perhaps someone will have to do it for them.