Controversial chatbot's safety measures 'a sticking plaster'
Chatbot platform Character.ai is overhauling the way it works for teenagers, promising it will become a "safe" space with added controls for parents.

The site is facing two lawsuits in the US - one over the death of a teenager - and has been branded a "clear and present danger" to young people.

It says safety will now be "infused" in all it does through new features which will tell parents how their child is using the platform - including how much time they're spending talking to chatbots and the ones they speak to the most.

The platform - which allows users to create digital personalities they can interact with - will get its "first iteration" of parental controls by the end of March 2025.

But Andy Burrows, head of the Molly Rose Foundation, called the announcement "a belated, reactive and completely unsatisfactory response" which he said "seems like a sticking plaster fix to their fundamental safety issues".
"It will be an early test for Ofcom to get to grips with platforms like Character.ai and to take action against their persistent failure to tackle completely avoidable harm," he said.

Character.ai was criticised in October when chatbot versions of the teenagers Molly Russell and Brianna Ghey were found on the platform.

The new safety features come as it faces legal action in the US over concerns about how it has handled child safety in the past, with one family claiming a chatbot told a 17-year-old that murdering his parents was a "reasonable response" to them limiting his screen time.
Disclaimers and notifications