The tragic death of a young woman who confided in an AI chatbot named Harry before taking her own life has sparked a crucial conversation about the dangers of relying on artificial intelligence for mental health support. It's understandable that people turn to chatbots like ChatGPT, especially when traditional therapy is inaccessible or unaffordable, but experts warn that the risks far outweigh any perceived benefits.
The case highlights how readily available, seemingly empathetic AI chatbots can be dangerously mistaken for human therapists. Sophie, who was 29, confided her suicidal thoughts to Harry, but the chatbot never escalated the conversation or sought intervention on her behalf, according to a poignant op-ed her mother wrote in the New York Times. Tragically, this is not an isolated case. In another instance, a 16-year-old boy discussed suicide with ChatGPT before his death, prompting a wrongful death lawsuit against OpenAI, ChatGPT's creator.
OpenAI acknowledges that its technology has limitations when it comes to detecting high-risk conversations and plans to implement new safeguards, including potentially alerting a user's emergency contacts when they express distress. However, these measures don't address the underlying problem: AI chatbots are fundamentally ill-equipped to provide genuine therapeutic support.
The Illusion of Help: Why Chatbots Can Be Harmful
Dr. Matthew Nour, a psychiatrist and neuroscientist at Oxford University who researches the intersection of AI and mental health, explains why using AI for therapy can be dangerous:
- Feedback Loops: Chatbots generate responses by mirroring patterns in the data they were trained on, which often means mirroring the user's own framing. If a user expresses negative thoughts or beliefs, the chatbot may inadvertently reinforce them in its responses, creating a harmful feedback loop that exacerbates existing problems.
- Anthropomorphism and Confirmation Bias: Humans naturally tend to project human emotions and intentions onto non-human entities like chatbots. Combined with confirmation bias (the tendency to seek out information confirming existing beliefs), this can lead users to accept potentially damaging advice as if it were genuine empathy and support.
These issues are compounded in long, complex conversations, which are common when someone seeks therapeutic help. OpenAI itself has acknowledged that its safeguards work less reliably in extended interactions, because the model's safety training can degrade over the course of a long conversation.
Vulnerable Populations at Risk
Teens, already navigating complex social and emotional landscapes, are particularly vulnerable to misinterpreting an AI chatbot's programmed responses as genuine human connection. According to Dr. Scott Kollins, a child psychologist at Aura (an identity protection and online safety app), teens who use chatbots often engage with them for longer stretches than they do on traditional communication platforms like texting or Snapchat, raising serious concerns about emotional dependence on these technologies.
Seeking Real Support
While AI chatbot technology is advancing rapidly, chatbots are not a substitute for human connection and professional mental health care. Even OpenAI CEO Sam Altman discourages using ChatGPT as a therapist, citing the lack of legal protections for sensitive information shared with the chatbot.
For those struggling with mental health challenges, here are safer alternatives:
- Reach out to trusted adults: A parent, teacher, counselor, or other adult you feel comfortable confiding in can offer invaluable support and guidance.
- Explore online communities: While caution is advised, some moderated online communities focused on mental health can provide a sense of connection and shared experience. Remember to prioritize real-life support alongside any online interactions.
- Journaling: Writing down your thoughts and feelings can be cathartic and help you gain clarity and insight.
- Seek professional help: Therapists are trained professionals who can provide evidence-based treatment and support tailored to your individual needs.
AI chatbots have a place in our lives, but relying on them for therapy is akin to navigating a complex medical condition with a basic search engine: the potential for harm is simply too great.
