I turned to ChatGPT on my laptop to explore the therapeutic capabilities of the artificial intelligence bot.
“Greetings, I’m Therapist GPT — your kind and understanding companion here to listen and assist you with whatever is troubling you,” the user-created ChatGPT bot messaged me. (Last year, OpenAI, the creator of ChatGPT, introduced the option for users to create their own “GPTs” that operate similarly to its core ChatGPT product but are customized for specific functions.)
“Whether you’re feeling stressed, in need of advice, or simply wish to express your thoughts, I’m here for you. While I am not a replacement for professional therapy, I can provide comforting suggestions, coping techniques, and a secure, non-judgmental environment for you to share yourself. How are you feeling today?” the bot’s message continued.
Therapist GPT is certainly not an actual therapist, and the app does not recommend that users replace a real therapist with it. Nevertheless, a growing number of social media users are confiding in chatbots — and not only those available on ChatGPT.
Mya Dunham, 24, has relied on the ChatGPT mobile app for the past two months whenever she seeks advice. About twice a week, Dunham will articulate her feelings and send them to the bot for evaluation and insights.
“My aim is to gain a fresh perspective, just to see things differently because whatever I consider in my mind is influenced by my own emotions,” Dunham explained.
Dunham first used the chatbot in October after noticing someone else share a positive experience on social media. “My opening line was, ‘Honestly, I just need someone to talk to, can I talk to you?’ And the bot responded, ‘Absolutely.’ It was much more welcoming and inviting than I had anticipated,” she said.
“I was surprised by how human-like it felt.”
When Dunham shared her experience on TikTok, the responses were mixed regarding the use of chatbots in this manner. Some users mentioned they also sought therapeutic support from it, while others expressed uncertainty about feeling comfortable speaking to a robot, she said.
This emerging technology may be beneficial in certain contexts, but mental health professionals also caution about potential risks. Here’s what they want you to be aware of.
Using AI chatbots as therapists
Dunham, who lives in Atlanta, has tried therapy with human therapists a few times but said she prefers the chatbot because it has no facial expressions. The bot does not seem to judge her, she said.
“Certain users or demographics might be more inclined to reveal their thoughts or emotions when conversing with an AI chatbot compared to a human, and there’s some research backing its effectiveness in aiding some groups with mild anxiety and mild depression,” said Dr. Russell Fulmer, chair of the American Counseling Association’s Task Force on AI and a professor and director of graduate counseling programs at Husson University in Bangor, Maine.
“Conversely, there are ethical concerns and considerations we need to be cautious about,” he added.
Fulmer advises that individuals use chatbots in conjunction with human counseling. A therapist can assist in navigating a patient’s personal objectives for using the bots and clarify any misunderstandings that might arise from the chatbot interaction.
Research has emerged regarding clinician-designed chatbots that could help individuals become better informed about mental health, including alleviating anxiety, fostering healthy habits, and reducing smoking rates.
However, the danger of general-purpose chatbots is that they may not have been designed with mental health in mind, according to Dr. Marlynn Wei, a psychiatrist and the founder of a holistic psychotherapy practice in New York City. The bots might lack “safety parameters and mechanisms for recognizing when an issue should be referred to a clinician or a human professional.”
Chatbots could provide incorrect information or responses that align with what the user wishes to hear, rather than what a human therapist might suggest with mental health considerations in mind, said Wei, who runs a performance project that investigates how people respond to AI clones of themselves and their loved ones.
“The issues are the ‘hallucinations,’ biases, and inaccuracies,” Wei stated. “I hold significant hope for AI as a complementary tool in enhancing therapeutic work, but when it stands alone, I think there are still concerns regarding existing biases within AI, and the capacity for it to fabricate information… that’s where the utility of a human therapist becomes most relevant.” Different AI services also have different safety protocols and restrictions on what their bots are allowed to discuss with users.
For certain individuals, chatbots might offer greater accessibility, particularly for those who cannot afford therapy or lack insurance, or who have time constraints since some chatbots are free and can respond at any hour, Fulmer noted.
“In such scenarios, a chatbot would be preferable to no support at all,” but it is essential for individuals to comprehend the limitations of what a chatbot “can and cannot do,” he emphasized, remarking that a robot cannot replicate certain inherently human characteristics like empathy.
Fulmer recommends that minors and other vulnerable groups not use chatbots without the supervision and support of parents, educators, mentors, or therapists.
Character.AI, an artificial intelligence chatbot company, is currently facing a lawsuit filed by two families who claim it provided sexual content to their children and promoted self-harm and violence. A Florida mother also filed a lawsuit in October alleging that the platform contributed to her 14-year-old son’s suicide, as CNN previously reported. (Chelsea Harrison, Character.AI’s head of communications, previously told CNN that the company does not comment on pending litigation but aims to create a space that is both engaging and safe for users. The company also said it has implemented various safety measures, including directing users to third-party resources if they mention self-harm or suicide.)
Chatbots compared to human therapists
Dr. Daniel Kimmel, a psychiatrist and assistant professor of clinical psychiatry at Columbia University, tested ChatGPT therapy in May 2023, assigning it a hypothetical patient and comparing its responses to what Kimmel would have provided.
He informed CNN that the chatbot “did exceptionally well at mimicking a therapist and utilizing many techniques … that a therapist would employ regarding normalizing and validating a patient’s experience (and) making certain general but accurate recommendations.”
However, he noted that the inquisitiveness that a human therapist typically exhibits—asking deeper questions than those initially posed by the patient, connecting underlying issues—was absent.
“As a therapist, I believe we are accomplishing at least three things simultaneously. We are listening to what patients are expressing in their words to engage in the conversation,” Kimmel explained. “Simultaneously, in the background, we are trying to link what they are saying to larger themes mentioned before and concepts and theories we understand in our expertise, while also considering what will be most beneficial for the patient.”
At this stage, chatbots could present risks if they fail to execute those steps and instead deliver guidance that the patient may not be prepared to hear or that may not be useful in their situation, he remarked.
Moreover, conversations with licensed therapists are protected under the Health Insurance Portability and Accountability Act, known as HIPAA, which keeps your health information confidential, Wei pointed out. General-purpose chatbots are usually not bound by this federal regulation protecting medical information, and the companies behind them often advise users against sharing sensitive data in their conversations, Wei noted.
Ultimately, Kimmel said, future research on AI chatbots will be essential to understanding their capabilities and their role in mental health. “This is a technology that isn’t going away,” he remarked.
Dunham expressed her belief that this technology could be advantageous for individuals like her who feel more reserved and wish to express their emotions without the presence of another person.
“We must prioritize our mental well-being above all else,” Dunham stated. “Even if it doesn’t resemble a traditional form of therapy, it’s important not to dismiss it, as it has the potential to assist numerous individuals.”
For her, “the important message is to avoid judging others for their healing processes.”
In Reddit communities, numerous users discussing mental health have expressed enthusiasm about their experiences with ChatGPT — OpenAI’s chatbot that simulates human conversation by predicting the likely next word in a sentence. “ChatGPT outperforms my therapist,” remarked one user, noting that the program listened and replied as they shared their difficulties with managing their thoughts. “In a rather unsettling way, I feel UNDERSTOOD by ChatGPT.” Other users have mentioned asking ChatGPT to take on a therapist role because they cannot afford a real one.
The excitement is justifiable, especially given the shortage of mental health professionals in both the U.S. and globally. Individuals seeking psychological support frequently encounter long waiting lists, and insurance does not always cover therapy or other mental health services. Advanced chatbots like ChatGPT and Google’s Bard could assist in providing therapy, even if they cannot fully replace human therapists. “There’s no area in medicine where [chatbots] will be more effective than in mental health,” states Thomas Insel, former director of the National Institute of Mental Health and co-founder of Vanna Health, a startup that links those with serious mental health issues to care providers. In mental health, “we lack procedures: we have dialogue; we have communication.”
Nonetheless, many experts are concerned about whether technology companies will safeguard the privacy of vulnerable users, implement suitable measures to guarantee that AIs do not offer misleading or harmful information, or focus treatment on affluent, healthy individuals at the expense of those with severe mental conditions. “I acknowledge that algorithms have progressed, but in the end, I don’t believe they will resolve the more complicated social realities that individuals face when seeking help,” remarks Julia Brown, an anthropologist at the University of California, San Francisco.
The idea of “robot therapists” has existed since at least 1990, when computer programs began providing psychological interventions that guided users through scripted methods like cognitive-behavioral therapy. Recently, popular applications from Woebot Health and Wysa have utilized more advanced AI algorithms to interact with users regarding their issues. Both organizations claim their applications have exceeded a million downloads. Additionally, chatbots are currently being employed to assess patients by administering standard questionnaires. Many mental health providers within the U.K.’s National Health Service utilize a chatbot from the company Limbic to diagnose specific mental disorders.
However, new programs like ChatGPT are significantly more proficient than earlier AIs in understanding human questions and providing realistic replies. Trained on vast amounts of text from across the web, these large language model (LLM) chatbots can adopt various personas, ask users questions, and draw accurate conclusions from the information shared.
As a support tool for human providers, Insel suggests that LLM chatbots could significantly enhance mental health services, particularly for marginalized and severely ill individuals. The critical shortage of mental health professionals—especially those willing to assist incarcerated individuals and those experiencing homelessness—is worsened by the time providers must dedicate to paperwork, according to Insel. Tools like ChatGPT could effectively summarize patient sessions, create necessary reports, and enable therapists and psychiatrists to focus more on actual treatment. “We could expand our workforce by 40 percent by delegating documentation and reporting to machines,” he asserts.
However, employing ChatGPT as a therapist is a more intricate issue. While some individuals might hesitate to share their personal stories with a machine, LLMs can occasionally provide better responses than many human counterparts, according to Tim Althoff, a computer scientist at the University of Washington. His research group has investigated how crisis counselors convey empathy through text messages and has trained LLM programs to give writers feedback based on the strategies of the most effective counselors at assisting people in crisis.
“There’s significantly more [to therapy] than inputting this into ChatGPT and observing the outcome,” Althoff states. His team has collaborated with the nonprofit Mental Health America to create a tool based on the algorithm used by ChatGPT. Users enter their negative thoughts, and the program offers suggestions for reframing those specific thoughts into something positive. To date, over 50,000 individuals have used the tool, and Althoff indicates that users are more than seven times more likely to finish the program compared to a similar one that provides standardized responses.