# Navigating the Future: Can Generative AI Replace Human Therapists?

Using generative AI for therapy might feel like a lifeline – but there’s danger in seeking certainty in a chatbot.
The rise of generative AI in recent years has been nothing short of revolutionary. With the ability to provide instant, articulate responses to a multitude of queries, tools like ChatGPT have become an alluring option for those seeking guidance in moments of emotional or interpersonal crisis. Yet, as helpful as these AI solutions may seem on the surface, it’s crucial to consider the implications of relying too heavily on them, especially when it comes to mental health and relationships.
## The Age of AI in Emotional Support
Tran sat across from me, phone in hand, scrolling. “I just wanted to make sure I didn’t say the wrong thing,” he explained, referring to a disagreement with his partner. “So I asked ChatGPT what I should say.”
> “It was articulate, logical and composed – too composed. It didn’t sound like Tran.”
As a psychologist, I encounter scenarios like Tran’s with increasing frequency: individuals turning to AI for therapeutic support. The approach has obvious advantages, such as 24/7 availability and immediate feedback, but it raises the question: what do we sacrifice in the process?
Generative AI presents itself as a convenient alternative in a world grappling with a dire shortage of mental health professionals. The World Health Organization estimated that in 2019, one in eight people globally was living with a mental disorder. In many regions, mental health services are overstretched, leaving people at their wit’s end and searching for alternatives. Here, AI steps in, offering what seems like sage advice at the click of a button.
## The Unseen Risks of AI Dependency
However, relying on AI for complex emotional processing introduces significant risks. As Tran discovered, these models, for all their articulate output, cannot navigate the intricacy of human emotions and relationships. They provide reassurance without accountability, encouraging individuals to ask again and again until they receive an “acceptable” response.
Tran often reworded prompts until the model gave him an answer that “felt right”. This constant tailoring wasn’t just about seeking clarity; it was the outsourcing of emotional processing. Instead of learning to tolerate distress or explore nuance, Tran sought AI-generated certainty. As a result, trusting his instincts became increasingly difficult over time.
### Ethical and Privacy Concerns
Moreover, the ethics surrounding such interactions are murky. Information shared with chatbots isn’t protected by the same confidentiality standards that bind registered mental health professionals, and despite assurances of privacy, many users remain unaware of how their data may be stored, used or shared. A further danger is misinformation: because these models generate text by prediction, they can produce “hallucinations” – confident yet entirely false responses with no basis in reality.
Worse still, AI doesn’t challenge avoidance, press users to explore underlying issues, or hold them accountable in their interpersonal communications.
## When AI Isn’t Enough
Tran’s experience highlights that while AI can provide assistance, it cannot replace the depth and breadth of human therapy.
### Building Proper Boundaries
> “Many psychologists, myself included, now encourage clients to build boundaries around their use of ChatGPT and similar tools.”
The interaction between therapist and client is inherently unique, characterized by pattern recognition, accountability, and uncomfortable yet essential exploration. When Tran’s messages started to sound foreign to his partner, it was a direct result of his over-reliance on AI: a ready but unfamiliar script in place of honest, albeit imperfect, self-expression.
As we worked together, the focus shifted from the technology itself to the behaviours prompting its use. For Tran, and countless others like him, the imperative becomes learning to face discomfort, to communicate authentically, and to trust in the messiness of human emotions.
## The Future of AI in Therapy
This isn’t to say there is no place for AI. In locations where access to therapists is severely limited, or where individuals require educational resources or light support, AI can be a powerful tool. But it must be used carefully—not as a surrogate for profound, nuanced human interaction, but as a complement to it.
Ultimately, for those of us in the field, the work is about helping people reclaim their own voice. For Tran, progress meant writing messages that were sometimes messy but always his own. It meant cultivating the belief that imperfection is part of life’s richness, and that good therapy thrives on this very principle.
> “A therapist doesn’t just answer; they ask and they challenge. They hold space, offer reflection and walk with you, while also offering up an uncomfortable mirror.”
## Conclusion: The Human Element
Generative AI is undoubtedly here to stay, offering possibilities we are only beginning to explore. However appealing AI advice might be in our fast-paced, over-burdened world, it lacks an essential human dimension. So, the next time you reach for your phone to ask an AI for help, ask yourself: are you seeking a script, or your truth?
### Closing Question
In a world captivated by technological convenience, how do we ensure the preservation of genuine human connection and emotional understanding? What happens when we forget to listen to, and trust, the sound of our own voices?