Klymkowsky, a professor at the University of Colorado Boulder, and Riedel, a professor at Front Range Community College in Colorado, are conducting a pilot study of two artificial intelligence tools. The first is "Rita," a chatbot tutor designed to engage introductory biology students in Socratic dialogue and increase inclusivity. The second is Rita's companion analysis bot, "Dewey," which is designed to help instructors evaluate student work, not for grading but so that teachers can identify student misconceptions and questions and adjust their instruction accordingly.
"It's really two functions," Klymkowsky says of the tools. "One is to help the instructor and free them from the idea that they have to cover everything without knowing whether the students really understand it. And it's helpful for the student to have a Socratic responder that makes them think about what they're saying."
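The article doesn't describe Dewey's internals, but the instructor-facing function Klymkowsky describes, surfacing common misconceptions across a class rather than grading individuals, can be pictured as a simple aggregation step. Here is a minimal sketch in Python, assuming the model has already tagged each student response with misconception labels; all names are illustrative, not Dewey's actual code:

```python
# Hypothetical sketch: aggregate model-tagged misconceptions across a
# class so the instructor can see what to reteach. Illustrative only;
# not Dewey's actual pipeline.
from collections import Counter

def summarize_misconceptions(tagged_responses: list[dict]) -> list[tuple[str, int]]:
    """Rank misconceptions by how many student responses exhibit them.

    tagged_responses example:
    [{"student": "A12", "misconceptions": ["confuses osmosis with diffusion"]}]
    """
    counts = Counter(
        tag
        for response in tagged_responses
        for tag in response["misconceptions"]
    )
    return counts.most_common()
```

A report built this way answers "what should I revisit tomorrow?" without attaching a grade to any individual student.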
However, for either of these chatbots to work as designed, avoiding hallucinations is key. Here's how Klymkowsky and Riedel are working to overcome AI hallucinations.
AI Tutors That Don't Hallucinate: Accurate Inputs
One of the reasons AI models hallucinate is that they're trained on vast amounts of data from the internet, and as anyone who has spent time on social media or Reddit knows, the internet isn't always a reliable narrator about, well, anything.
To overcome this, Rita is trained exclusively on verified and accurate content. "It's limited by the materials we use," Klymkowsky says. These include biology textbooks and peer-reviewed articles that Klymkowsky co-authored.
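Neither the researchers nor the article spells out the implementation, but confining a chatbot to a vetted corpus is typically done with retrieval-augmented generation: the model is shown only passages pulled from the approved texts and is told to answer from those alone. A minimal sketch, with naive word-overlap retrieval standing in for real embeddings and `call_llm` as a hypothetical stand-in for whatever model API the actual tool uses:

```python
# Sketch of corpus-grounded answering: the bot draws only on vetted
# passages (textbooks, peer-reviewed articles), not the open internet.
# Retrieval here is toy word overlap; real systems use embeddings.

def relevance(passage: str, question: str) -> int:
    """Toy score: how many question words appear in the passage."""
    q_words = set(question.lower().split())
    return sum(1 for word in passage.lower().split() if word in q_words)

def grounded_prompt(question: str, corpus: list[str], k: int = 3) -> str:
    """Build a prompt that confines the model to the top-k vetted passages."""
    top = sorted(corpus, key=lambda p: relevance(p, question), reverse=True)[:k]
    return (
        "Answer using ONLY the passages below. If they don't contain the "
        "answer, say you don't know rather than guessing.\n\n"
        "PASSAGES:\n" + "\n\n".join(top) + "\n\nQUESTION: " + question
    )

# answer = call_llm(grounded_prompt(student_question, textbook_passages))
```

The "say you don't know" instruction matters as much as the retrieval step: a grounded model that is still allowed to guess will hallucinate at the edges of its corpus.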
To train an AI chatbot on this more limited and specific data set, Klymkowsky is working with CustomGPT.ai. Klymkowsky and Riedel received a grant from the tech company for their AI tutor pilot program that gives them premium access to the tool and other support.
CustomGPT.ai uses GPT-4 technology and is dedicated to eliminating AI hallucinations for its users. Alden Do Rosario, CEO and founder of the company, says that tech companies used to talk about putting security or privacy first; however, he says AI platforms "must first be anti-hallucination."
Allowing users to train AI models on their own specific data is the first important step in preventing AI hallucinations, Rosario says, but there is another, equally important factor that is too often overlooked in technology design: human input.
"Unlike most software [problems], hallucinations are one of those things where the human element is required," Rosario says. "You can't just put an engineer in a dark room and say, 'Hey, go solve hallucinations.'"
Instead, human experts need to tell engineers when AI tools are hallucinating. Rosario says that when users of his platform report hallucinations to the development team, the team studies the root cause and improves the tool to help prevent that type of error for all users. He compares the process to a car company learning of a defect and issuing a recall to owners.
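The article doesn't detail CustomGPT.ai's internal tooling, but the recall analogy maps onto a simple workflow: users file hallucination reports, engineers attach a diagnosed root cause, and the resulting fix is applied platform-wide rather than per account. A hypothetical sketch, with all names illustrative:

```python
# Hypothetical report -> root cause -> platform-wide fix loop, modeled
# on Rosario's car-recall analogy. Not CustomGPT.ai's actual system.
from dataclasses import dataclass, field

@dataclass
class HallucinationReport:
    user_id: str
    question: str
    bad_answer: str
    root_cause: str | None = None  # filled in by engineers after triage

@dataclass
class Platform:
    open_reports: list[HallucinationReport] = field(default_factory=list)
    global_fixes: list[str] = field(default_factory=list)  # applied to ALL users

    def file_report(self, report: HallucinationReport) -> None:
        self.open_reports.append(report)

    def resolve(self, report: HallucinationReport, root_cause: str, fix: str) -> None:
        """Like a recall: one diagnosed defect becomes a fix for every user."""
        report.root_cause = root_cause
        self.global_fixes.append(fix)
        self.open_reports.remove(report)
```

The key design point is that `global_fixes` lives on the platform, not on the individual account that reported the problem.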
Rosario adds that this is a part of the development process that isn't happening as much as it should in AI. "It's pretty much ignored, because even big companies like OpenAI think, 'We'll send an engineer and they'll figure it out,'" he says.
Beyond Accuracy
So far, these two factors have resulted in chatbots that Klymkowsky feels comfortable piloting with students and teachers. While both chatbots are still being studied and tested, Klymkowsky hasn't encountered any hallucinations. However, he acknowledges that building an accurate science chatbot may be less challenging than doing so in other academic fields where debate and nuance are more common.
“It's easier in the sciences because there are things we know and they're not ambiguous,” he says.
Additionally, there are challenges beyond accuracy in creating effective AI tutors. For example, Rita is designed to engage students in conversation rather than simply giving them answers. "If the student makes an incorrect assumption or leaves something out, it asks, 'Have you thought about this?' or 'Does this idea change how you would respond?'" Klymkowsky says.
Making sure these conversations feel natural and engaging for students is an ongoing process, Klymkowsky says. "You don't want the bot to lecture them," he says. "In fact, that's probably the biggest challenge: stopping the bot from lecturing them."
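The article doesn't say how Rita's conversational style is enforced, but "ask, don't lecture" behavior is commonly shaped through system instructions sent with every exchange. Here is a rough sketch with entirely hypothetical wording; Rita's actual instructions are not public:

```python
# Hypothetical system prompt for Socratic, non-lecturing tutoring.
# The wording below is illustrative, not Rita's actual configuration.
SOCRATIC_RULES = """You are a Socratic tutor for introductory biology.
- Never give the answer outright; respond with a short guiding question.
- If the student makes an incorrect assumption or leaves something out,
  ask, e.g., "Have you thought about this?" or
  "Does this idea change how you would respond?"
- Keep replies to one or two sentences. Do not lecture.
"""

def build_messages(history: list[dict], student_msg: str) -> list[dict]:
    """Prepend the Socratic rules to every message batch sent to the model."""
    return (
        [{"role": "system", "content": SOCRATIC_RULES}]
        + history
        + [{"role": "user", "content": student_msg}]
    )
```

Even with rules like these, chat models tend to drift back toward explaining, which is why keeping the bot from lecturing is an ongoing tuning problem rather than a one-time fix.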
Solving that problem is important but less vital, since most educators can live with students using a boring tutor. An inaccurate tutor is another matter.