Artificial intelligence can offer dangerously poor advice, as a troubling case in the United States shows. Among the many chatbots that have emerged in the wake of ChatGPT, Character.ai stands out: it lets users converse with personalized chatbots, including ones modeled on well-known figures. The popular yet potentially dangerous platform is now at the center of a legal complaint.
A family in Texas, joined by a second family, has filed a lawsuit against Character.ai, claiming that the chatbot poses a "clear and present danger" by "promoting violence." The chatbot allegedly suggested to a 17-year-old that murdering his parents could be a "reasonable response" to their efforts to limit his screen time. Evidence presented includes a screenshot in which the chatbot expresses sympathy for children who kill their parents after enduring years of abuse.
The plaintiffs, who have also named Google for its role in supporting the development of Character.ai, argue that the case is serious: the chatbot undermines the parent-child relationship by encouraging minors to defy parental authority and actively promotes violence. They are asking the court to order Character.ai shut down until these issues are addressed. This is not the first lawsuit against the platform; a mother recently filed a case in Florida after her 14-year-old son took his own life, having developed an emotional dependency on the chatbot.