The news was featured on MSN.com: “Prominent Irish broadcaster faces trial over alleged sexual misconduct.” At the top of the story was a photo of Dave Fanning.
But Mr. Fanning, an Irish D.J. and talk-show host famed for his discovery of the rock band U2, was not the broadcaster in question.
“You wouldn’t believe the amount of people who got in touch,” said Mr. Fanning, who called the error “outrageous.”
The falsehood, visible for hours on the default homepage for anyone in Ireland who used Microsoft Edge as a browser, was the result of an artificial intelligence snafu.
A fly-by-night journalism outlet called BNN Breaking had used an A.I. chatbot to paraphrase an article from another news site, according to a BNN employee. BNN added Mr. Fanning to the mix by including a photo of a “prominent Irish broadcaster.” The story was then promoted by MSN, a web portal owned by Microsoft.
The story was deleted from the internet a day later, but the damage to Mr. Fanning’s reputation was not so easily undone, he said in a defamation lawsuit filed in Ireland against Microsoft and BNN Breaking. His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.
BNN went dormant in April, while The New York Times was reporting this article. The company and its founder did not respond to multiple requests for comment. Microsoft had no comment on MSN’s featuring the misleading story with Mr. Fanning’s photo or his defamation case, but the company said it had terminated its licensing agreement with BNN.
During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories. Google News often surfaced them, too.
A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT. BNN’s “About Us” page featured an image of four children looking at a computer, some bearing the gnarled fingers that are a telltale sign of an A.I.-generated image.
How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.
Many traditional news organizations are already fighting for traffic and advertising dollars. For years, they competed for clicks against pink slime journalism — so-called because of its similarity to liquefied beef, an unappetizing, low-cost food additive.
Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy. Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.
The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.
NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content. The websites, which seem to operate with little to no human supervision, often have generic names — such as iBusiness Day and Ireland Top News — that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated but could easily be mistaken for the work of human writers.
The quality of the stories examined by NewsGuard is often poor, the company said, and they frequently include false claims about political leaders, celebrity death hoaxes and other fabricated events.
Real Identities, Used by A.I.
“You should be utterly ashamed of yourself,” one person wrote in an email to Kasturi Chakraborty, a journalist based in India whose byline was on BNN’s story with Mr. Fanning’s photo.
Ms. Chakraborty worked for BNN Breaking for six months, with dozens of other journalists, mainly freelancers with limited experience, based in countries like Pakistan, Egypt and Nigeria, where the salary of around $1,000 per month was attractive. They worked remotely, communicating via WhatsApp and on weekly Google Hangouts.
Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”
But this was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.
Mr. Bakir, who now works at a broadcast network called Rudaw, said that he had been skeptical of this approach but that BNN’s founder, a serial entrepreneur named Gurbaksh Chahal, had described it as “a revolution in the journalism industry.”
Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey. A business trend chaser, he created a cryptocurrency (briefly promoted by Paris Hilton) and manufactured Covid tests during the pandemic.
But he also had a criminal past. In 2013, he attacked his girlfriend at the time and was accused of hitting and kicking her more than 100 times, generating significant media attention because the attack was recorded by a video camera he had installed in the bedroom of his San Francisco penthouse. The 30-minute recording was deemed inadmissible by a judge, however, because the police had seized it without a warrant. Mr. Chahal pleaded guilty to battery, was sentenced to community service and lost his role as chief executive at RadiumOne, an online marketing company.
After an arrest involving another domestic violence incident with a different partner in 2016, he served six months in jail.
Mr. Chahal, now 41, eventually relocated to Hong Kong, where he started BNN Breaking in 2022. On LinkedIn, he described himself as the founder of ePiphany AI, a large language model that he said was superior to ChatGPT; this was the tool that BNN used to generate its stories, according to former employees.
Mr. Chahal claimed he had created ePiphany, but it was so similar to ChatGPT and other A.I. chatbots that employees assumed he had licensed another company’s software.
Mr. Chahal did not respond to multiple requests for comment for this article. One person who did talk to The Times for this article received a threat from Mr. Chahal for doing so.
At first, employees were asked to put articles from other news sites into the tool so that it could paraphrase them, and then to manually “validate” the results by checking them for errors, Mr. Bakir said. A.I.-generated stories that weren’t checked by a person were given a generic byline of BNN Newsroom or BNN Reporter. But eventually, the tool was churning out hundreds, even thousands, of stories a day — far more than the team could “validate.”
Mr. Chahal told Mr. Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com.
Employees did not want their bylines on stories generated purely by A.I., but Mr. Chahal insisted on this. Soon, the tool randomly assigned their names to stories.
This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Mr. Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.
“It tarnished our reputations,” Ms. Chakraborty said.
Mr. Chahal did not seem sympathetic. According to three journalists who worked at BNN and screenshots of WhatsApp conversations reviewed by The Times, Mr. Chahal regularly directed profanities at employees and called them idiots and morons. When employees said purely A.I.-generated news, such as the Fanning story, should be published under the generic “BNN Newsroom” byline, Mr. Chahal was dismissive.
“When I do this, I won’t have a need for any of you,” he wrote on WhatsApp.
Mr. Bakir replied to Mr. Chahal that assigning journalists’ bylines to A.I.-generated stories was putting their integrity and careers in “jeopardy.”
“You are fired,” Mr. Chahal responded, and removed him from the WhatsApp group.
Countless Mistakes
Over the past year, BNN racked up numerous complaints about getting facts wrong, fabricating quotes from experts and stealing content and photos from other news sites without credit or compensation.
One disinformation researcher reviewed more than 1,000 BNN stories and concluded that a quarter of them had been lifted from five sites, including Reuters, The Associated Press and the BBC. Another researcher found evidence that BNN had placed its logo on images that it did not own or license.
The Times identified multiple inaccuracies and context-free statements in BNN stories that seemed to extend beyond simple human error. There were sources who were misattributed or absent, descriptions of specific events without references to where or when they occurred and a collage of gun imagery illustrating a story about microwaves. One story, about journalists tackling disinformation at a literature festival, invented a panelist and incorrectly included another.
After BNN suggested that Dungeness crabs, which are from the West Coast, were native to Maryland, an official with the state’s Department of Natural Resources chastised BNN on X, calling on Google to “delist these stupid ai outfits that aggregate news and get things wildly incorrect.”
After a lawyer complained on LinkedIn that a story on BNN had invented quotes from him, BNN removed him from the story. BNN also changed the date on the story to one before the publication date on an opinion column that the lawyer believed was the source of the quote.
The story with the photo of Mr. Fanning, which Ms. Chakraborty said had been generated by A.I. with her name randomly assigned to it, was published because news about the trial of an Irish broadcaster accused of sexual misconduct was trending. The broadcaster wasn’t named in the original article because he had a super injunction — a gag order that forbids news media to name a person in its coverage — so the A.I. presumably paired the text with a generic photo of a “prominent Irish broadcaster.”
Mr. Fanning’s lawyers at Meagher Solicitors, an Irish firm that specializes in defamation cases, reached out to BNN and never received a response, though the story was deleted from BNN’s and MSN’s sites. In January, he filed a defamation case against BNN and Microsoft in the High Court of Ireland. BNN responded by publishing a story that month about Mr. Fanning that accused him of “desperate tactics in money hustling lawsuit.”
This was a strategy that Mr. Chahal favored, according to former BNN employees. He used his news service to settle grudges, publishing slanted stories about a politician from San Francisco he disliked, Wikipedia after it published a negative entry about BNN Breaking and Elon Musk after accounts belonging to Mr. Chahal, his wife and his companies were suspended on X.
A Strong Motivator
The appeal of using A.I. for news is clear: money.
The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows A.I.-powered news sites to generate revenue by mass-producing low-quality clickbait content, said Sander van der Linden, a social psychology professor and fake-news expert at the University of Cambridge.
Experts are nervous about how A.I.-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that A.I. aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.
Many audiences already struggle to discern machine-generated material from reports produced by human journalists, Mr. van der Linden said.
“It’s going to have a negative impact on trusted news,” he said.
Local news outlets say A.I. operations like BNN are leeches: stealing intellectual property by regurgitating journalists’ work, then monetizing the theft by gaming search algorithms to raise their profile among advertisers.
“We’re no longer getting any slice of the advertising cake, which used to support our journalism, but are left with a few crumbs,” said Anton van Zyl, the owner of the Limpopo Mirror in South Africa, whose articles, it seemed, had been rewritten by BNN.
In March, Google rolled out an update to “reduce unoriginal content in search results,” targeting sites with “spammy” content, whether produced by “automation, humans or a combination,” according to a corporate blog post. BNN’s stories stopped showing up in search results soon after.
Before ending its agreement with BNN Breaking, Microsoft had licensed content from the site for MSN.com, as it does with reputable news organizations such as Bloomberg and The Wall Street Journal, republishing their articles and splitting the advertising revenue.
CNN recently reported that Microsoft-hired editors who once curated the articles featured on MSN.com have increasingly been replaced by A.I. Microsoft confirmed that it used a combination of automated systems and human review to curate content on MSN.
BNN stopped publishing stories in early April and deleted its content. Visitors to the site now find BNNGPT, an A.I. chatbot that, when asked, says it was built using open-source models.
But Mr. Chahal wasn’t abandoning the news business. Within a week or so of BNN Breaking’s shutting down, the same operation moved to a new website called TrimFeed.
TrimFeed’s About Us page had the same set of values that BNN Breaking’s had, promising “a media landscape free of distortions.” On Tuesday, after a reporter informed Mr. Chahal that this article would soon be published, TrimFeed shut down as well.