5-Min Science: The Promise and Peril of AI in Mental Health
- Zachary Meehan
- Apr 15
- 6 min read

As the mental health crisis deepens globally, artificial intelligence (AI) has emerged as a powerful, scalable tool for extending access to care. Two recent articles—“Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots” by Khawaja and Bélisle-Pipon (2023) and “Can AI replace psychotherapists? Exploring the future of mental health care” by Zhang and Wang (2024)—examine the role of AI in therapy from complementary perspectives.
Both works grapple with AI's potential to supplement therapeutic services while also scrutinizing its limitations, especially in relation to empathy, ethics, and the nature of the therapeutic relationship. Together, these articles present a nuanced and, at times, critical exploration of what it means to integrate AI into mental health care and how we might do so responsibly.
AI’s Promise in Mental Health Care
Both articles agree that AI mental health tools, especially chatbots like Woebot and Wysa, represent a breakthrough in accessibility and convenience. These tools provide on-demand, 24/7 support, which is especially valuable for populations with limited access to traditional mental health services, such as people in remote areas or low-income communities and those deterred by social stigma (Khawaja & Bélisle-Pipon, 2023; Zhang & Wang, 2024).
Users can engage with these platforms anonymously, receive guidance based on cognitive behavioral therapy (CBT) principles, and track emotional trends over time. AI’s ability to process vast amounts of user data allows it to personalize interactions and identify risk patterns, such as early signs of depression or anxiety, potentially intervening before a crisis occurs (Zhang & Wang, 2024).
The authors also converge in recognizing that AI can ease the burden on overextended mental health systems. Zhang and Wang (2024) point out that AI could serve as a "first line of triage," filtering cases and referring high-risk users to human therapists when necessary. Khawaja and Bélisle-Pipon (2023) take a slightly more cautious stance, noting that while AI can augment access, it should never be mistaken for professional therapy itself.
Therapeutic Misconception and the Illusion of Empathy
The concept of therapeutic misconception is a central theme in the work of Khawaja and Bélisle-Pipon (2023). They define it as the mistaken belief that an AI chatbot provides the same kind of care as a licensed human therapist. This is more than a simple misunderstanding—it reflects a deeper psychological phenomenon in which users anthropomorphize chatbots, interpreting their pre-programmed empathy as real concern. Zhang and Wang (2024) echo this concern, warning that while AI can simulate empathy through natural language processing and emotional tone, it lacks consciousness and genuine emotional understanding. They emphasize that AI cannot “feel” or engage in a reciprocal emotional relationship—an essential component of therapeutic healing.
Interestingly, while both articles agree on the fundamental limitations of AI empathy, Zhang and Wang (2024) are slightly more optimistic about its future development. They suggest that, through more sophisticated machine learning and emotionally intelligent programming, AI might one day approximate empathy more convincingly, though they concede this is still speculative. Khawaja and Bélisle-Pipon (2023) are more skeptical, arguing that any such simulation is performative rather than relational, and cannot replace the core human qualities—like warmth, responsiveness, and moral judgment—that define therapy.
Bias, Safety, and Ethical Design
Another area of strong alignment between the two articles is their shared concern over bias in AI systems. AI chatbots are trained on data that may reflect societal prejudices, leading to biased or inappropriate responses. Khawaja and Bélisle-Pipon (2023) highlight how marginalized users—particularly those who are BIPOC or LGBTQ+—may receive inaccurate or even harmful feedback if the AI has not been properly trained on diverse and inclusive data. Zhang and Wang (2024) expand on this point by discussing algorithmic opacity: most AI tools are "black boxes," meaning even developers can’t always predict how they will behave in novel situations. This unpredictability poses serious ethical risks when dealing with users in distress.
Both articles agree on the need for robust ethical design and oversight. They advocate for transparency in how AI tools are marketed and implemented. Developers and healthcare providers must be clear that these tools are support mechanisms, not replacements for therapy. Khawaja and Bélisle-Pipon (2023) particularly stress the importance of “honest marketing,” warning that overstated claims can mislead users and foster therapeutic misconception. Zhang and Wang (2024), while endorsing transparency, also call for standardized regulations and ethical review boards to monitor the development and deployment of AI in clinical settings.
The Irreplaceable Role of Human Therapists
While the tone of the two articles differs—with Zhang and Wang (2024) more focused on AI’s potential, and Khawaja and Bélisle-Pipon (2023) emphasizing caution—both conclude that human therapists remain central to effective mental health care. The therapeutic alliance, built on mutual trust, emotional attunement, and interpersonal connection, cannot be fully replicated by AI. Khawaja and Bélisle-Pipon (2023) caution that overreliance on AI tools could discourage people from seeking human help, especially if early chatbot experiences are unhelpful or dismissive. Zhang and Wang (2024), while acknowledging these risks, propose a hybrid model in which AI supports therapists by handling routine assessments, monitoring mood logs, and even assisting in personalized treatment planning.
In this vision, AI acts as an assistant—never the therapist itself. This model not only protects the core human values of therapy but also leverages AI’s strengths in efficiency and scalability. Both articles stress the importance of human oversight, particularly in monitoring how users engage with AI and ensuring that high-risk cases are escalated to professional care.
Conclusion
AI mental health tools offer real promise in making care more accessible, affordable, and scalable. But this promise comes with caveats. Without clear boundaries, ethical oversight, and realistic expectations, AI chatbots may foster therapeutic misconception, fail to provide meaningful support, or even cause harm—particularly to vulnerable populations. As Khawaja and Bélisle-Pipon (2023) and Zhang and Wang (2024) both argue, AI should be viewed not as a substitute for therapy, but as a complement to it. When thoughtfully designed and ethically deployed, these tools can play a powerful role in modern mental health systems. But the heart of therapy—genuine empathy, relational connection, and moral discernment—remains uniquely human.
Key Takeaways
AI in mental health care can increase accessibility and offer scalable support, particularly through chatbots and virtual therapists.
Users often mistake these tools for true therapy, leading to what researchers call therapeutic misconception.
While AI can simulate empathy, it cannot replace the emotional depth and ethical reasoning of a human therapist.
Bias, lack of transparency, and data privacy remain serious concerns in the development and use of AI mental health tools.
Both articles advocate for careful oversight, ethical design, and clear communication to ensure that AI supports—rather than replaces—human-centered mental health care.
Glossary
algorithmic opacity: the phenomenon where the decision-making processes of AI systems are not transparent or understandable, even to their developers.
artificial intelligence (AI): computer systems designed to simulate aspects of human intelligence, such as understanding language or making decisions.
bias: systematic errors in AI behavior resulting from biased training data, potentially leading to unfair or harmful outcomes.
chatbot: a software application that interacts with users via conversation, often used in customer service or mental health support.
empathy: the ability to understand and share the feelings of another, considered vital in effective psychotherapy.
human oversight: the involvement of trained professionals in monitoring AI tools to ensure safety, accuracy, and ethical use.
therapeutic misconception: the mistaken belief that an AI chatbot is equivalent to a human therapist.
References
Khawaja, Z., & Bélisle-Pipon, J. C. (2023). Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5, 1278186. https://doi.org/10.3389/fdgth.2023.1278186
Zhang, Z., & Wang, J. (2024). Can AI replace psychotherapists? Exploring the future of mental health care. Frontiers in Psychiatry, 15, 1444382. https://doi.org/10.3389/fpsyt.2024.1444382
About the Author
Zachary Meehan earned his PhD in Clinical Psychology from the University of Delaware and serves as the Clinic Director for the university's Institute for Community Mental Health (ICMH). His clinical research focuses on improving access to high-quality, evidence-based mental health services, bridging gaps between research and practice to benefit underserved communities. Zachary is actively engaged in professional networks, holding membership affiliations with the Association for Behavioral and Cognitive Therapies (ABCT) Dissemination and Implementation Science Special Interest Group (DIS-SIG), the BRIDGE Psychology Network, and the Delaware Project. Zachary joined the staff at Biosource Software to disseminate cutting-edge clinical research to mental health practitioners, furthering his commitment to the accessibility and application of psychological science.
