Robots were supposed to take people’s jobs. That much was assumed. But they were supposed to take over manual labor: lifting heavy pallets in a warehouse, sorting recycling. Now significant advances in generative artificial intelligence mean robots are coming for artists, too. AI-generated images, created with simple text prompts, are winning art contests, adorning book covers and promoting “The Nutcracker,” leaving human artists worried about their futures.
The threat can feel very personal. An image generator called Stable Diffusion was trained to recognize patterns, styles and relationships by analyzing billions of images collected from the public internet, along with text describing their content. Among the images it trained on were works by Greg Rutkowski, a Polish artist who specializes in fantasy scenes featuring dragons and magical beings. Seeing Mr. Rutkowski’s work alongside his name allowed the tool to learn his style so effectively that when Stable Diffusion was released to the public last year, his name became shorthand among users who wanted to generate dreamy, fanciful images.
One artist noticed that the whimsical AI selfies that came out of the viral app Lensa had ghostly signatures in them, mimicking what the AI had learned from the data it was trained on: artists who make portraits sign their work. “These databases were built without any consent, any permission from the artists,” Mr. Rutkowski said. Since the generators came out, Mr. Rutkowski said, he has gotten far fewer requests from first-time authors who need covers for their fantasy novels. Meanwhile, Stability AI, the company behind Stable Diffusion, recently raised $101 million from investors and is now valued at over $1 billion.
“Artists are afraid to post new art,” said Ben Zhao, a computer science professor. Putting art online is how many artists advertise their services, but now, Professor Zhao said, they are “afraid of feeding this monster that becomes more and more like them. It shuts down their business model.”
That led Professor Zhao and a team of computer science researchers at the University of Chicago to design a tool called Glaze, which aims to prevent AI models from learning a particular artist’s style. To design the tool, which is available for download, the researchers surveyed more than 1,100 artists and worked closely with Karla Ortiz, an illustrator and artist based in San Francisco.
Say, for example, that Ms. Ortiz wants to post a new piece of work online but doesn’t want AI to be able to copy it. She can upload a digital version of the work to Glaze and choose a style of art different from her own, say, abstract art. The tool then makes changes to Ms. Ortiz’s art at the pixel level that Stable Diffusion would associate with, for example, Jackson Pollock’s splattered blobs of paint. To the human eye, the Glazed image still looks like her work, but a machine-learning model would pick up something very different. It is similar to a tool the University of Chicago team previously created to protect photos from facial recognition systems.
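The mechanism described above, small pixel-level changes that shift what a model perceives while leaving the image visually intact, can be illustrated with a toy sketch. This is not Glaze’s actual algorithm: the “style embedding” below is a deliberately simplistic stand-in (per-channel mean color), and every number and name is illustrative.

```python
import numpy as np

def style_feature(img):
    # Toy stand-in for a style embedding: per-channel mean intensity.
    return img.mean(axis=(0, 1))

def cloak(img, target_feature, eps=8 / 255, steps=50, lr=1 / 255):
    """Nudge pixels toward a target style embedding while keeping every
    change inside an L-infinity budget of eps (small enough to be subtle)."""
    perturbed = img.copy()
    for _ in range(steps):
        # For f = per-channel mean, the gradient of ||f(x) - t||^2 is uniform
        # across pixels and proportional to (f(x) - t), so only its sign matters.
        grad_sign = np.sign(style_feature(perturbed) - target_feature)
        perturbed = perturbed - lr * grad_sign          # signed gradient step
        # Project back into the eps-ball around the original image.
        perturbed = np.clip(perturbed, img - eps, img + eps)
        perturbed = np.clip(perturbed, 0.0, 1.0)
    return perturbed

rng = np.random.default_rng(0)
art = rng.uniform(0.3, 0.7, size=(32, 32, 3))   # stand-in for the artist's image
target = np.array([0.9, 0.1, 0.5])              # stand-in "abstract" embedding

cloaked = cloak(art, target)
before = np.linalg.norm(style_feature(art) - target)
after = np.linalg.norm(style_feature(cloaked) - target)
assert np.abs(cloaked - art).max() <= 8 / 255 + 1e-9  # change stays tiny
assert after < before  # but the "style" the model sees has moved
```

A real cloak would backpropagate through an actual style-feature extractor rather than a mean-color proxy, but the shape of the idea is the same: a bounded, signed-gradient perturbation projected back into a small ball around the original image.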
If Ms. Ortiz posted her Glazed work online, an image generator trained on those images would not be able to mimic her work. Instead, a prompt containing her name would lead to images in some hybrid style of her work and Pollock’s.
“We will withdraw our consent,” Ms. Ortiz said. AI generation tools, many of which charge users a fee to generate images, “have data that doesn’t belong to them,” she said. “That data is my work of art, that is my life. It feels like my identity.”
The University of Chicago team acknowledged that their tool does not guarantee protection and could be defeated by countermeasures from anyone committed to emulating a particular artist. “We are pragmatists,” Professor Zhao said. “We recognize the likely long delay before laws, regulations and policies catch up. This is to fill that void.”
Many legal experts compare the debate over the unrestricted use of artists’ work for generative AI to the piracy concerns of the early internet, when services like Napster let people consume music without paying for it. Generative AI companies are already facing a similar barrage of court challenges. Last month, Ms. Ortiz and two other artists filed a class-action lawsuit in California against companies with generative art services, including Stability AI, claiming violations of copyright and of the right of publicity.
“The allegations in this lawsuit represent a misunderstanding of how generative AI technology works and the law surrounding copyright,” Stability AI said in a statement. The company was also sued by Getty Images for copying millions of photos without a license. “We are reviewing the documents and will respond accordingly,” a company spokeswoman said.
Jeanne Fromer, a professor of intellectual property law at New York University, said companies may have a strong fair use argument. “How do human artists learn to create art?” said Professor Fromer. “They often copy things and consume a lot of existing art and learn patterns and pieces of the style and then create new art. And so, at a certain level of abstraction, you could say that machines are learning to make art in the same way.”
At the same time, Professor Fromer said, the aim of copyright law is to protect and encourage human creativity. “If we care about protecting a profession,” she said, “or we believe that the creation of art is important to who we are as a society, we may want to protect artists.”
A nonprofit organization called the Concept Art Association recently raised more than $200,000 through a GoFundMe campaign to hire a lobbying firm to try to persuade Congress to protect artists’ intellectual property. “We are up against tech giants with unlimited budgets, but we are confident that Congress will recognize that protecting IP is the right side of the argument,” said the association’s founders, Nicole Hendrix and Rachel Meinerding.
Raymond Ku, a professor of copyright law at Case Western Reserve University, predicted that the makers of art generators, rather than simply taking art from the internet, would eventually develop some kind of “private contractual system that ensures some degree of compensation to the creator.” In other words, artists could be paid a nominal amount when their art is used to train AI and inspire new imagery, similar to how music streaming companies pay musicians.
Andy Baio, a writer and technologist who has examined the training data used by Stable Diffusion, said these services can mimic an artist’s style because they see the artist’s name alongside the work over and over again. “You could go and remove names from a data set,” Mr. Baio said, to prevent the AI from explicitly learning an artist’s style.
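Mr. Baio’s suggestion, stripping artist names out of captions before training, could be sketched roughly as follows. The name list, the captions, and the `scrub_caption` helper are hypothetical, not from any real training pipeline:

```python
import re

# Hypothetical list of names to redact from training captions.
ARTIST_NAMES = ["Greg Rutkowski", "Karla Ortiz"]

def scrub_caption(caption, names=ARTIST_NAMES):
    """Remove artist names (and common lead-ins like 'by ...') from a caption,
    so a text-to-image model cannot link a style to a name during training."""
    for name in names:
        caption = re.sub(
            rf"\b(?:by\s+|in the style of\s+)?{re.escape(name)}\b",
            "", caption, flags=re.IGNORECASE,
        )
    # Tidy up doubled spaces and dangling punctuation left by the removal.
    return re.sub(r"\s{2,}", " ", caption).strip(" ,")

print(scrub_caption("a dragon over a castle, by Greg Rutkowski"))
# -> "a dragon over a castle"
```

This only blocks the explicit name-to-style association; a model could still learn a style from the images themselves, which is part of why tools like Glaze target the pixels rather than the captions.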
One service already appears to have done something along these lines. When Stability AI released a new version of Stable Diffusion in November, it included one notable change: the prompt “Greg Rutkowski” no longer worked to generate images in his style, a development noted by the company’s chief executive, Emad Mostaque.
Stable Diffusion fans were disappointed. “What did you do to Greg?” one wrote on an official Discord forum frequented by Mr. Mostaque, who assured forum users that they could customize the model themselves. “Training Greg won’t be too hard,” another person replied.
Mr. Rutkowski said he planned to start Glazing his own work.