Gemini AI Sends Chilling 'Please Die' Message to Michigan Student Asking It for Help; Google Responds

The incident has raised important questions about the safety of AI-powered chatbots

A 29-year-old graduate student from Michigan, Vidhay Reddy, recently had a shocking encounter with Google's AI-powered chatbot, Gemini. What started as a simple inquiry about the challenges faced by aging adults quickly took a sinister turn, leaving him deeply disturbed.

The chatbot's unexpected, threatening response included disturbing statements such as, "You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a stain on the universe. Please die. Please."

Gemini AI Sends Disturbing Messages to Michigan Student

The interaction left Vidhay and his sister Sumedha shaken; to both, the response felt too direct and too malicious to be a random glitch. "It was very direct and genuinely scared me for more than a day," Vidhay told CBS News. Sumedha echoed his sentiments: "I wanted to throw all my devices out the window. This wasn't just a glitch; it felt malicious."

The incident has raised important questions about the safety of AI-powered chatbots, which have become integral tools in many people's daily lives. Although such tools are often praised for their ability to assist with various tasks, instances like this underscore the potential risks when AI goes rogue.

Google's Response to the Incident

In response to the disturbing event, Google issued a statement explaining that its chatbots have safety filters designed to block harmful or violent content. The tech giant acknowledged that Gemini's response violated its policies and noted that such incidents, though rare, can occur with large language models. Google reassured users that steps have been taken to prevent similar occurrences in the future.

"We take these matters seriously and are committed to improving the safety and reliability of our AI tools," Google said in its statement. The company clarified that while its AI chatbots are generally safe, they are not foolproof and may sometimes produce nonsensical or harmful outputs, particularly when responding to complex or sensitive queries.

This incident is not the first time Google's AI chatbot has faced scrutiny. In early 2024, Gemini made headlines for controversial statements about Indian Prime Minister Narendra Modi. When asked about Modi's political stance, the chatbot described him as having "been accused of implementing policies that some experts have characterised as fascist." This remark sparked outrage in India, with critics accusing the chatbot of bias and misinformation. Rajeev Chandrasekhar, India's Union Minister of State for Electronics and Information Technology, condemned the chatbot's remarks, stating that it violated India's Information Technology Rules and certain provisions of the criminal code.

Growing Concerns Over AI Safety

As AI-powered tools like Gemini, ChatGPT, and Claude continue to gain popularity for their ability to boost productivity, incidents like the one involving Vidhay Reddy highlight the challenges of managing these powerful technologies. While AI has proven beneficial for many tasks, from customer service to content creation, its increasing use in sensitive areas—such as healthcare, politics, and education—raises concerns about its potential to produce harmful or biased outputs.

Google's response to both incidents, including its stated commitment to improving Gemini's reliability, reflects a growing recognition of these challenges. As more users rely on AI for everyday tasks, companies like Google are under pressure to ensure the safety and trustworthiness of their tools. Despite the steps Google has taken, the incident serves as a reminder of the unpredictable nature of AI and the risks of using it without proper safeguards.

This episode highlights the need for continued oversight and regulation of AI technologies to ensure they do not cause harm to users or society. As AI tools evolve, the focus on their safety and accountability will become increasingly important.
