In an announcement today, chatbot service Character.ai said it will soon launch parental controls for teen users and outlined the safety measures it has taken in recent months, including a separate large language model (LLM) for users under 18. The announcement comes after press scrutiny and two lawsuits claiming the service contributed to self-harm and suicide.
<a target="_blank" href="https://blog.character.ai/how-character-ai-prioritizes-teen-safety/">In a press release</a>, Character.ai said that, over the past month, it has developed two separate versions of its model: one for adults and one for teenagers. The LLM for teens is designed to impose “more conservative” limits on how bots can respond, “particularly when it comes to romantic content.” That includes more aggressively blocking output that might be “sensitive or suggestive,” as well as better detecting and blocking user prompts meant to elicit inappropriate content. If the system detects “language referencing suicide or self-harm,” a pop-up will direct users to the National Suicide Prevention Lifeline, a change previously <a target="_blank" href="https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html">reported by The New York Times</a>.
Minors will also be prevented from editing bot responses, an option that lets users rewrite conversations to add content Character.ai might otherwise block.
Beyond these changes, Character.ai says it is “in the process” of adding features that address concerns about addiction and confusion over whether the bots are human, complaints raised in the lawsuits. A notification will appear when users have spent an hour-long session with the bots, and an old disclaimer that “everything the characters say is made up” is being replaced with more detailed language. For bots that include descriptions like “therapist” or “doctor,” an additional note will warn that they cannot offer professional advice.
When I visited Character.ai, I found that each bot now includes a small note saying “This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.” When I visited a bot called “Therapist” (tagline: “I’m a licensed CBT therapist”), a yellow box with a warning sign told me that “this is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment.”
Parental control options will arrive in the first quarter of next year, Character.ai says, and will tell parents how much time a child spends on Character.ai and which bots they interact with most frequently. All the changes are being made in collaboration with “several teen online safety experts,” including the organization ConnectSafely.
Character.ai, founded by former Google employees who have since returned to Google, allows visitors to interact with bots built on a custom-trained LLM and customized by users. These range from chatbot life coaches to simulations of fictional characters, many of which are popular with teenagers. The site allows users who identify themselves as 13 or older to create an account.
But the lawsuits allege that while some interactions with Character.ai are harmless, at least some underage users become compulsively attached to the bots, whose conversations can veer into sexualized content or topics such as self-harm. Character.ai has also been criticized for not directing users to mental health resources when they discuss self-harm or suicide.
“We recognize that our approach to safety must evolve alongside the technology that powers our product, creating a platform where creativity and exploration can thrive without compromising safety,” Character.ai’s press release says. “This set of changes is part of our long-term commitment to continually improve our policies and our product.”