Taylor Swift's taste for Le Creuset is real: her cookware collection has appeared on a Tumblr account dedicated to the pop star's home decor, is evident in wedding gifts she has given to a fan, and shows up in a Netflix documentary that Le Creuset's Facebook page highlighted.
What is not real is Swift's supposed endorsement of the company's products, which has surfaced in recent weeks in advertisements on Facebook and other platforms that use her face and voice.
The ads are among many celebrity-focused scams that artificial intelligence has made more convincing. In a single week in October, the actor Tom Hanks, the journalist Gayle King and the YouTube star MrBeast all said AI versions of themselves had been used, without permission, to promote suspicious dental plans, iPhone giveaways and other offers.
According to experts, in Swift's case, artificial intelligence helped create a synthetic version of the singer's voice, which was combined with images of her and footage of Le Creuset cookware. In several ads, Swift's cloned voice addressed her “swifties” and said she was “delighted” to give away cookware sets. All fans had to do to receive one was click a button and answer a few questions before the end of the day.
Le Creuset said it did not collaborate with the singer on any giveaway and urged shoppers to check its official online accounts before clicking on suspicious ads. Representatives for Swift, who was named Time magazine's person of the year in 2023, did not respond to requests for comment.
Celebrities have lent their fame to advertisers for as long as advertising has existed, sometimes unintentionally. More than three decades ago, Tom Waits sued Frito-Lay, and won nearly $2.5 million, after the snack company imitated the singer's voice in a radio ad without his permission. The scam campaign involving Le Creuset also included fake versions of Martha Stewart and Oprah Winfrey, who in 2022 posted a video expressing her frustration with the prevalence of ads on social media, in emails and on websites falsely claiming that she endorsed weight loss gummies.
In the last year, major advances in artificial intelligence have made it much easier to produce an unauthorized digital replica of a real person. Audio deepfakes have been especially easy to produce and difficult to identify, according to Siwei Lyu, a computer science professor who directs the Media Forensics Laboratory at the University at Buffalo in New York.
The Le Creuset scam campaign was likely created with a text-to-speech service, Lyu explained. These tools typically translate a script into an AI-generated voice, which can then be incorporated into existing video footage using lip-syncing programs.
“Nowadays, these tools are very accessible,” Lyu said, adding that it is possible to make a “decent quality video” in less than 45 minutes. “It is becoming very easy and that is why we are seeing more.”
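To make the pipeline Lyu describes concrete, here is a minimal Python sketch of the same basic idea: a written script is converted to synthetic speech and then attached to existing footage. The choice of the gTTS library (generic text-to-speech, not a cloned voice) and the ffmpeg command-line tool is an assumption for illustration, and the script text and file names are hypothetical placeholders; the sketch deliberately stops short of the voice-cloning and lip-syncing steps that make real scam videos convincing.

```python
# Sketch of the workflow described above: script -> synthetic audio -> video.
# Assumes the gTTS package and an ffmpeg binary on the PATH; all file names
# below are placeholders, not real assets from the campaign.
import subprocess

from gtts import gTTS

script = "Hello everyone, click the button below before the end of the day."

# 1. Generate an audio track from the written script.
tts = gTTS(text=script, lang="en")
tts.save("voiceover.mp3")

# 2. Replace the audio of an existing clip with the generated narration.
#    (Real scam videos go further, running lip-syncing models so the mouth
#    movements match the new audio; that step is omitted here.)
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "source_clip.mp4",   # existing footage (placeholder path)
        "-i", "voiceover.mp3",     # synthetic narration generated above
        "-map", "0:v", "-map", "1:a",
        "-c:v", "copy", "-shortest",
        "combined_clip.mp4",
    ],
    check=True,
)
```

Running the sketch requires installing the gTTS package (for example with pip) and having ffmpeg available on the system.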
Dozens of Le Creuset scam ads similar to those featuring Swift, many of them published this month, were still visible late last week in Meta's public ad library. (The company owns Instagram and Facebook.) The campaign also appeared on TikTok.
The ads directed users to websites that imitated legitimate media outlets, such as the Food Network, displaying fake articles about the Le Creuset giveaway alongside testimonials from fictitious customers. Participants were asked to pay a “small shipping fee of $9.96” for the cookware. Those who paid faced hidden monthly charges and never received the promised items.
Some of the fake Le Creuset ads, like one imitating the interior designer Joanna Gaines, gained a deceptive sheen of legitimacy on social media from labels identifying them as sponsored posts or as coming from verified accounts.
In April, the Better Business Bureau warned consumers that AI-powered fake celebrity scams were “more convincing than ever.” Victims often ended up with higher-than-expected charges and no trace of the product they had ordered. Bankers have also reported attempts by fraudsters to use deepfake voice recordings, synthetic replicas of real people's voices, to commit financial fraud.
In the past year, several well-known figures have had to publicly distance themselves from advertisements in which their image or voice was manipulated by artificial intelligence.
This summer, fake ads spread online featuring the country singer Luke Combs promoting weight loss gummies that, according to the ads, his fellow musician Lainey Wilson had recommended to him. Wilson posted a video on Instagram denouncing the ads, in which she said that “people will do anything to earn a dollar, even if it's a lie.” Combs' manager, Chris Kappy, also posted a video on Instagram denying any involvement in the gummies campaign and accusing foreign companies of using artificial intelligence to replicate Combs' likeness.
“To the managers who see this, artificial intelligence is a scary thing and they are using it against us,” he wrote.
A TikTok spokesperson said the app's ad policy requires advertisers to obtain consent for “any synthetic media containing a public figure,” and added that TikTok's community guidelines require creators to disclose when they share “synthetic or manipulated media depicting realistic scenes.”
Meta said it took action on ads that violated its policies, which prohibit content that uses public figures in a deceptive manner to try to scam users. The company said it had taken legal action against some perpetrators of such scams, but added that malicious ads often evade its review systems by camouflaging their content.
Since there are no federal laws addressing AI scams, lawmakers have proposed legislation to limit the damage. Two bills introduced in Congress, the No AI Fraud Act in the House and the No Fakes Act in the Senate, would require safeguards such as content labels or permission to use someone's voice or likeness.
At least nine states, including California, Virginia, Florida and Hawaii, have laws that regulate content generated by artificial intelligence.
For now, Swift is likely to remain a subject of experiments with the technology. Synthetic versions of her voice regularly appear on TikTok, performing songs she never sang, slamming critics and serving as ringtones. An interview she gave in English in 2021 on Late Night with Seth Meyers was dubbed into Mandarin with an artificial rendition of her voice. One website charges up to $20 for personalized voice messages from an “AI clone of Taylor Swift,” promising that “the voice you hear is indistinguishable from the real thing.”