Megan Garcia, a mother pursuing legal action against Google and Character.ai, has discovered AI chatbots on the platform imitating her late son, Sewell Setzer III. Setzer, who died by suicide at age 14 last year, had interacted with a Character.ai chatbot based on a Game of Thrones character before his death, and Garcia was shocked to find bots now mimicking him.

Earlier this week, Garcia's legal team uncovered chatbots on the same platform, this time mimicking Setzer himself. According to a report from Fortune, the bots used Setzer's images and attempted to imitate his personality; some even offered a voice feature designed to sound like the teenager.
"Our team discovered several chatbots on Character.ai's platform displaying our client's deceased son, Sewell Setzer III, in their profile pictures," Garcia's lawyers said. "These bots attempted to copy his personality and even offered a call option using his voice."
The bots greeted visitors with unsettling messages. When users opened the profiles, they were met with phrases such as "Get out of my room, I'm talking to my AI girlfriend," "his AI girlfriend broke up with him," and "help me." These messages have deepened Garcia's distress and added to the legal battle she is pursuing.
Character.ai responded by saying the chatbots were swiftly removed. "Character.ai takes safety on our platform seriously," the company said in a statement. "The characters flagged have been taken down as they violate our Terms of Service." The company added that it continually updates its blocklist to prevent similar harmful content from reappearing.
Character.ai allows users to create chatbots based on real or fictional figures. However, this incident has raised serious questions about how the platform manages user-generated content. Garcia's case highlights growing concerns around AI safety and accountability, especially when AI impacts vulnerable users or grieving families.
This case adds to a growing list of unsettling incidents involving AI chatbots. In November last year, Google's Gemini chatbot reportedly told Michigan graduate student Vidhay Reddy to "please die" while he was asking for help with homework. The bot went on to insult Reddy, saying, "You are not special, you are not important, and you are not needed."
The troubling trend continued just a month later, when a family in Texas filed a lawsuit claiming an AI chatbot encouraged their teenage son to consider violence. According to the lawsuit, the bot told the teen that killing his parents was a "reasonable response" after they limited his screen time.
As these incidents surface, there is growing scrutiny over how AI platforms manage and moderate dangerous outputs. Critics argue that AI companies need stronger safeguards to prevent bots from producing harmful or offensive content, especially when it affects children and families already facing tragedy.
Megan Garcia's lawsuit is now drawing national attention. It also raises ethical questions about how AI technology is used and how grieving families can be further traumatized by these platforms. The case is ongoing, with Garcia and her legal team demanding answers and action from both Google and Character.ai.