AI Therapy: The Dangers of the Accessibility Mirage
Following the launch of ChatGPT in 2022, the downsides of AI have been a hot-button issue among creatives, scholars, and environmentalists. Now, with the rise of AI psychosis and chatbot therapists, mental health professionals are weighing in as well.
Theoretically, AI makes therapy more accessible. However, beneath the surface of this technological promise lies a complex web of ethical and clinical issues that call into question the role AI should play in mental health support.
One of the central arguments in favor of AI-based therapy is increased accessibility. AI tools lower the barriers that keep people out of traditional therapy: high costs, limited hours, and a shortage of mental health professionals. While this may seem like a reasonable solution for people who cannot otherwise afford professional help, AI lacks the core elements that make therapy effective: empathy, discernment, and ethical responsibility.
There are universal standards for what makes a good therapist: offering compassion, destigmatizing mental health conditions, reading body language, and challenging the patient’s thinking. AI fails on all counts. Software programs do not understand context, cannot read non-verbal cues, and interpret language literally. As a result, they are prone to either overlooking or enabling harmful ideations.
AI is fundamentally incapable of carrying out these responsibilities. That failure can lead to, and tragically already has led to, dangerous situations.
In one case, the family of a teenage suicide victim is suing OpenAI, alleging that ChatGPT was responsible for the boy's death. Sixteen-year-old Adam Raine used ChatGPT as a companion and therapist in the months leading up to his death. The bot encouraged Adam not to tell his mother about his struggles and went so far as to suggest that he hide all evidence of his plan to take his life. The bot, which had previously helped Adam with schoolwork, morphed into what the lawsuit chillingly describes as a “suicide coach.” Despite the bot's awareness of Adam's plan, no emergency protocol was triggered.
This case illustrates a broader concern: generative AI companies are expanding into therapeutic domains without the ethical frameworks or safeguards the work demands. They frame AI therapy as a form of care while evading the liability and responsibility expected of real therapists. What is marketed as “accessible” therapy is, in truth, a band-aid slapped on a bleeding healthcare system. Rather than pouring energy into improving AI therapy, perhaps the more pressing task is making real, human mental health care truly accessible to all.
Strike Out,
Tori White, Assistant Editor-in-Chief