No need for more scare stories about the impending automation of the future. Artists, designers, photographers, authors, actors, and musicians see little sense of humor in jokes about AI programs that will one day do their jobs for less money. That dark dawn is here, they say.
Vast amounts of imaginative production, work done by people in the kinds of jobs once supposed to be protected from the threat of technology, have already been captured from the web, to be adapted, combined, and anonymized by algorithms for commercial use. But just as GPT-4, the improved version of the AI generative text engine, was proudly unveiled last week, artists, writers, and regulators have begun to fight back in earnest.
“Image libraries are being scoured for content and huge data sets are being accumulated right now,” says Isabelle Doran, director of the Photographers Association. “So if we want to ensure appreciation of human creativity, we need new ways to track content and smarter legal protections.”
Collective campaigns, lawsuits, international norms and cyber-attacks are being deployed at great speed on behalf of the creative industries in an effort, if not to win the battle, then at least to “rage, rage against the dying of the light,” in the words of the Welsh poet Dylan Thomas.
Poetry may yet remain a tough nut for AI to crack, but among the first to face a real threat to their livelihoods are photographers and designers. Generative software can produce images at the touch of a button, while sites like the popular NightCafe create “original” data-driven works of art in response to a few simple verbal cues. The first line of defense is a growing movement of visual artists and image agencies who are now “opting out” of allowing their work to be harvested as “training data” for artificial intelligence software. As a result, thousands have posted “No AI” signs on their social media accounts and web galleries.
A software-generated approximation of Nick Cave’s lyrics in particular drew the wrath of the artist at the beginning of this year. He called it “a grotesque mockery of what it is to be human.” It’s not a great review. Meanwhile, AI innovations like Jukebox are also threatening musicians and songwriters.
And digital voice-cloning technology is putting voice actors and narrators out of regular work. In February, a veteran audiobook narrator from Texas named Gary Furlong noticed that one of his contracts granted Apple the right to “use audiobook files for machine learning training and models.” The actors’ union SAG-AFTRA took up his case. The agency involved, Findaway Voices, now owned by Spotify, has since agreed to a temporary halt and points to a “revocation” clause in its contracts. But this year Apple released its first algorithmically narrated audiobooks, a service Google has offered for two years.
The seeming inevitability of this new challenge to artists strikes even onlookers as unfair. As award-winning British author Susie Alegre, a recent victim of AI plagiarism, asks: “Do we really need to find other ways of doing things that people enjoy anyway? Things that give us a sense of accomplishment, like writing a poem? Why not replace the things we don’t enjoy doing?”
Alegre, a London-based human rights lawyer and writer, argues that the value of authentic thinking has already been undermined: “If the world is going to put its faith in AI, what’s the point? Pay rates for original work have been greatly reduced. This is an automated stripping of intellectual assets.”
The truth is, AI’s forays into the creative world are just the headline-grabbing part of the story. After all, it’s fun to read about an award-winning computer-generated song or piece of art; accounts of software innovation in insurance underwriting are less gripping. Regardless, scientific efforts to simulate imagination have always been at the forefront of the drive for better AI, precisely because it’s so hard to do. Could the software really produce compelling pictures or engaging stories? So far, the answer to both, happily, is “no.” The right tone and emotional register remain hard to fake.
What is at stake, however, is the viability of creative careers. ChatGPT is just one of the latest AI products, along with Google’s Bard and Microsoft’s Bing, to shake up copyright law. Artists and writers who are losing out to AI tend to speak sadly of “garbage-spouting” and “bullshit” programs, and of a sense of “violation.” This moment of creative jeopardy has arrived not through any malevolent impulse but because of the vast amounts of data now available on the web for covert harvesting. Its victims are alarmed all the same.
An analysis of the growing problem in February found that the work of designers and illustrators is the most vulnerable. Software programs such as Midjourney, Stable Diffusion and DALL·E 2 create images in seconds, drawing on a database of styles and color palettes. One platform, ArtStation, was reportedly so overwhelmed by anti-AI memes that it asked for AI-generated artwork to be labeled.
At the Photographers Association, Doran set up a survey to gauge the scale of the problem. “We have clear evidence that the image data sets, which form the basis of these commercial AI generative image content programs, consist of millions of images from public websites taken without permission or payment,” she says. Using the Have I Been Trained site, which has access to the Stable Diffusion dataset, its “shocked” members have identified their own images and are mourning the reduced value of their intellectual property.
The opt-out movement is spreading, with tens of millions of artworks and images opted out in recent weeks. But tracking is tricky, as images are used in altered forms and opt-out notices can be hard to find. Many photographers also report that their “style” is being imitated to produce cheaper work. “Since these programs are designed for ‘machine learning,’ at what point can they easily generate the style of an established professional photographer and displace the need for their human creativity?” asks Doran.
For Alegre, who last month discovered that paragraphs from her award-winning book Freedom to Think were being offered, uncredited, by ChatGPT, there are hidden dangers in simply opting out: “It means you’re completely out of the story, and for a woman that’s problematic.”
AI is already wrongly attributing Alegre’s work to male authors, so removing her from the equation would compound the error. Data banks can only reflect what they have access to.
“ChatGPT said that I did not exist, although it cited my work. In addition to the damage to my ego, I exist on the internet, so it felt like a violation,” she says.
“It later came up with a pretty accurate synopsis of my book, but said the author was some random guy. And interestingly, my book is about the way misinformation distorts our view of the world. AI content really is about as reliable as checking your horoscope.” She would like to see AI development funds diverted to finding new legal protections.
AI’s fans may well promise that it can help us see beyond our intellectual limitations and better understand the future. But for plagiarized artists and writers, the best hope now seems to be that it will teach humans, once again, to doubt and verify everything we see and read.