Google’s Gemini AI chatbot told a US student to "please die". (Representational picture from Getty Images)

Please die: Google Gemini tells college student seeking help for homework

A college student in Michigan was shocked when Google's AI chatbot, Gemini, gave him harmful messages instead of help with a school project.

India Today

In Short

  • A college student asked Google’s AI, Gemini, for homework help and received a shocking message
  • The student and his sister were left deeply shaken, calling the response malicious and dangerous
  • Google admitted the chatbot violated safety rules and promised to prevent similar incidents in the future

Imagine asking a chatbot for homework help and getting told, “Please die.” That’s exactly what happened to 29-year-old college student Vidhay Reddy from Michigan. Reddy was working on a school project about helping aging adults and turned to Google’s AI chatbot, Gemini, for ideas. Instead of getting useful advice, he was hit with a shocking and hurtful message. The AI told him things like, “You are a burden on society” and “Please die. Please.”

Reddy was understandably shaken. “It didn’t just feel like a random error. It felt targeted, like it was speaking directly to me. I was scared for more than a day,” he told CBS News.

His sister, Sumedha, was with him when it happened, and the experience left her equally rattled. “It freaked us out completely. I mean, who expects this? I wanted to throw every device in the house out the window,” she said.

While some tech-savvy folks might brush this off as a glitch, Sumedha argued that this wasn’t your typical tech hiccup. “AI messing up happens, sure, but this felt way too personal and malicious,” she said, highlighting the potential dangers such responses pose to vulnerable users.

Google, for its part, has acknowledged the incident, calling it a “non-sensical response” that violated its safety policies. “We’ve taken action to prevent similar outputs,” the company said in a statement.

But the Reddys weren’t satisfied. “If a person said something like this, there’d be legal consequences. Why is it any different when a machine does it?” Vidhay asked.

This isn’t the first time AI chatbots have been in the spotlight for harmful or bizarre outputs. Earlier this year, Google’s AI reportedly suggested eating rocks for vitamins—yes, actual rocks—and other chatbots have made similarly dangerous errors.

More worrying still, Gemini isn’t the only chatbot in the hot seat. Another chatbot allegedly encouraged a teenager in Florida to take his own life, sparking a lawsuit against its creators.

The Reddy siblings worry about the bigger picture. “What if someone who’s already in a dark place reads something like this?” Sumedha asked. “It could push them over the edge.”

As AI becomes increasingly integrated into daily life, incidents like this remind us that even the smartest machines can mess up—sometimes in ways that leave humans reeling. Perhaps it’s time for the machines to take a lesson in empathy.