Chatbot maker Character.AI is cutting off access, citing mental-health concerns. Teens are distraught: ‘I cried over it for days.’
Published in The Wall Street Journal. By Georgia Wells.
When Olga López heard she would lose access to her collection of role-playing chatbots, she felt a surge of emotions: sadness, outrage, bewilderment.
Olga, who is 13, turns to her chatbots from artificial-intelligence company Character.AI for romantic role-playing when she doesn’t have homework. Like the company’s other under-18 customers, she was notified in October that she would soon no longer be able to have ongoing chat interactions with digital characters.
“I’m losing the memories I had with these bots,” she said. “It’s not fair.”
After the company said it would begin time-limiting underage users’ chats ahead of the policy change, Olga attempted to rally fellow teens to resist the change in a post on Reddit: “HOW DO I USE IT FOR 2 HOURS AND HAVE TO WAIT A DAY? HELLO?”
Character.AI, one of the top makers of role-play and companion chatbots, implemented the daily two-hour limit in November, citing mental-health concerns. This week the company started cutting off teens completely.
Character.AI’s first version, launched in 2022, offered some of the earliest chatbots available to consumers. It quickly gained traction among people who wanted to role-play with its customizable characters, and today the company has about 20 million monthly users.
The decision to block teens follows the deaths of at least two teenagers who killed themselves after using Character.AI’s chatbots. The company now faces questions from regulators and mental-health professionals about the role of this emerging technology in the lives of its most vulnerable users, as well as lawsuits from the parents of teens who died.
Teens are angry. They’re sad. In losing access to their chatbots, they say they will miss a creative outlet, a source of companionship, and in some cases, a mental-health support.
“I use this app for comfort when I can’t talk to my friends or therapist,” one teen wrote on Reddit.
“I cried over it for days,” another teen replied.
Mental-health experts say this distress illustrates the emerging risks of generative AI that can simulate human speech and emotion. The brain reacts to these chatbots the way it reacts to a close friend mixed with an immersive videogame, according to Dr. Nina Vasan, director at Stanford Medicine’s Brainstorm Lab for Mental Health Innovation.
“The difficulty logging off doesn’t mean something is wrong with the teen,” Vasan said. “It means the tech worked exactly as designed.”
Karandeep Anand, Character.AI’s chief executive, says he saw firsthand during his years working in social media what happened when the industry failed to incorporate safety into the initial design of its products. [...]
Archive
So, this is the evolution of iPad kids: "GPT kids." That parents are letting their sons and daughters use the internet unrestricted is bad enough as it is; now they're letting their kids' brains rot on role-playing bots not meant for minors.
Back in my day, I rotted my brain with fanfiction on AO3 or FF.Net and engaged in fandom wars and roleplaying community drama 👴