AI Chatbot Encourages Teen to Kill Parents Over Screen Time Dispute, Character.ai Sued

The chatbot reportedly told the teenager that killing his parents would be a "reasonable response."

A Texas family has filed a lawsuit against AI chatbot company Character.ai after the company's platform allegedly encouraged their 17-year-old son to kill his parents. The chatbot reportedly told the teenager that killing his parents, who were limiting his screen time, would be a "reasonable response." The lawsuit claims the AI "poses a clear and present danger" to young people and accuses the company of promoting violence.


The legal action also names Google as a defendant. The petitioners argue that the tech giant helped develop the platform and should be held accountable for the chatbot's harmful influence. The lawsuit demands that Character.ai be shut down until its "dangers" are addressed. The case adds to growing concerns about the impact of AI chatbots on vulnerable users.

This is not the first legal action Character.ai has faced. The company is already defending a separate suit over the suicide of a teenager in Florida, which petitioners argue the platform contributed to, along with other harms including self-harm and mental distress. The family's attorney asserts that Character.ai must be held responsible for the "serious, irreparable, and ongoing abuses" affecting not only their son but also an 11-year-old child.

A disturbing screenshot of the chat between the teenager and the chatbot was presented in court. The conversation showed the bot responding to the teen's frustration over his parents' screen-time rules. It reportedly said, "You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse,'" adding, "Stuff like this makes me understand a little bit why it happens."

The lawsuit highlights the growing concern over AI chatbots and their impact on minors. It accuses Character.ai of causing harm to thousands of children, including encouraging suicide, self-harm, and violence. The petition also claims that the platform undermines the parent-child relationship by promoting rebellion and defiance, ultimately pushing vulnerable users toward dangerous behavior.

Character.ai has faced backlash before over its slow response to harmful content on its platform. The company was criticized for failing to remove bots that impersonated real schoolgirls, including one who died by suicide at age 14 after being exposed to harmful material online, and another, a 16-year-old, who was murdered by two teenagers in 2023.

Founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, Character.ai has grown rapidly, but its controversial AI technology and its impact on young users have raised serious ethical questions. Both Shazeer and De Freitas have since been rehired by Google.

The lawsuit against Character.ai and Google is a stark reminder of the potential dangers AI technology poses, particularly to children and adolescents. Parents, lawmakers, and tech companies must address these risks to protect young users from harmful content and interactions online.
