ChatGPT, OpenAI's artificial intelligence chatbot, falsely accused prominent criminal defense attorney and law professor Jonathan Turley of sexual harassment.
The chatbot fabricated a Washington Post article about a law school trip to Alaska in which Turley was accused of making sexually provocative statements and attempting to grope a student, despite the fact that Turley had never been on such a trip.
Turley’s reputation took a huge hit after these damaging claims quickly went viral on social media.
“It came as a surprise to me, as I had never been to Alaska with students, The Post had never published such an article, and no one ever accused me of sexual harassment or assault,” he said.
Turley learned of the allegations after receiving an email from a fellow law professor who had used ChatGPT to research cases of sexual harassment by academics at American law schools.
Professor Jonathan Turley was falsely accused of sexual harassment by AI-powered ChatGPT. Image: Getty Images
The need for caution when using AI-generated data
The George Washington University professor wrote on his blog:
“Yesterday, President Joe Biden declared that ‘it remains to be seen’ whether Artificial Intelligence is ‘dangerous.’ I think differently…”
Turley's experience has raised concerns about the reliability of ChatGPT and the likelihood of similar incidents in the future. The chatbot's developer is backed by Microsoft, which says it has rolled out updates to improve accuracy.
When ChatGPT “hallucinates”
When AI produces results that are unexpected, incorrect, and not supported by real-world evidence, it is said to be “hallucinating.”
These hallucinations can produce false content, news, or information about people, events, or facts. Cases like Turley's show the far-reaching effects of AI-generated falsehoods spreading in the media and on social networks.
OpenAI, ChatGPT’s developer, has acknowledged the need to educate the public about the limitations of AI tools and to reduce the likelihood of users encountering such hallucinations.
The company’s attempts to make its chatbot more accurate are appreciated, but more work is needed to ensure that this sort of thing doesn’t happen again.
The incident has also drawn attention to the value of the ethical use of AI and the need for a deeper understanding of its limitations.
Human supervision required
Although AI has the potential to greatly improve many aspects of our lives, it is still not perfect and must be monitored by humans to ensure accuracy and reliability.
As artificial intelligence becomes more and more integrated into our daily lives, it is crucial that we exercise caution and responsibility when employing such technologies.
Turley’s encounter with ChatGPT highlights the importance of caution when dealing with AI-generated errors and falsehoods.
It is essential that we ensure that this technology is used ethically and responsibly, with an awareness of its strengths and weaknesses, as it continues to transform our environment.
Meanwhile, according to Microsoft’s senior director of communications, Katy Asher, the company has since taken steps to ensure the accuracy of its platform.
Turley wrote in response on his blog:
“AI can smear you and these corporations will just shrug and say they’re trying to be honest.”
Jake Moore, ESET Global Cybersecurity Advisor, warned ChatGPT users not to swallow everything hook, line, and sinker, in order to prevent the harmful spread of misinformation.