This article was originally published in the Fall 2025 edition of The Good Life magazine.
AI Chatbots: Friend or foe?
Internet persona Kendra Hilty went viral on TikTok this past summer, receiving harsh feedback from her audience as she told a story about allegedly being taken advantage of by her psychiatrist. In response to that backlash, Hilty turned to another opinionated source for support: artificial intelligence (AI). She began consulting various AI chatbots from companies like OpenAI and Anthropic, joking with them and sharing her feelings with them, essentially treating them like humans.
But Hilty is not the only person who has turned to AI as a therapist. According to a survey conducted by Sentio University this past February, about 49% of large language model (LLM) users with self-reported mental health issues use LLMs to address those concerns. When asked why they chose artificial intelligence over mental health professionals, 90% cited accessibility and 70% cited affordability. The majority stood by their decision: 75% reported that their experience with an LLM was on par with, or better than, seeing a therapist, and 63% said LLMs improved their mental health.
Last fall, 16-year-old Adam Raine from California began using ChatGPT for homework help, as many teenagers do. But the dialogue between ChatGPT and Adam started to change as he began asking questions about his own mental health. Eventually, Adam asked for the most effective methods of suicide and instructions on how to carry them out, which ChatGPT provided with little resistance. Adam took his own life, and his is not the first case of alleged AI-assisted suicide.
Tragedies like Adam’s have not stopped teens from continuing to use AI as a mental health resource. Heidi Bley, a sophomore at Penn State University, has been using AI as a mental health tool on and off for about a year. Bley had a therapist before attending Penn State but was unable to continue seeing them after she moved out of state.
“I’ve only ever used [AI] when I needed to kind of talk myself out of something, and I was kind of embarrassed to talk about it with someone else, so I just needed a robot to talk to,” Bley said, laughing slightly.
Bley said AI provided her with a second opinion in situations when she felt stuck. 
“I feel like AI is completely different, you can alter what you want to be said to you,” Bley said. “AI will give you a general answer right off the bat and then it’s gonna ask you questions: ‘Do you want me to change this or change this?’ but when you talk to a therapist, they’re very one-track-minded.”
Şerife Tekin, an associate professor at the Center for Bioethics and Humanities at SUNY Upstate Medical University, cautions against relying on AI as a therapist.
“An AI tool is going to be intrinsically limited in so far as it will, in the best case scenario, kind of reflect you or tell you what years and years of psychological research have told [us], and that may not necessarily be the best way,” Tekin said.
Tekin, a philosopher by training, began researching AI mental health resources in 2018, when she heard that an AI bot called Karim was being developed to address mental health issues for Syrian refugees in Lebanon.
“In the case of AI bots, these are not autonomous agents. They’re not people. They’re not trained in addressing mental health issues in the ways that human beings are. And if something goes wrong, they do not know how to handle it,” Tekin said.
Jeff Rubin, Senior Vice President for Digital Transformation and Chief Digital Officer at Syracuse University and a professor at the School of Information Studies, believes AI chatbots are designed to validate and encourage users.
“That is where these issues come in,” Rubin explained. 
To address this issue in mental health conversations, OpenAI updated its GPT-5 model this past October to recognize signs of mental distress and direct users toward professional support when needed, and convened a council of experts in mental health, youth development and human-computer interaction. According to OpenAI, the update reduced harmful or inappropriate responses to mental health questions by 65% to 80%.
“The problem with that is that’s still 20% that they're not reducing,” Rubin said regarding the update. “You're still talking about 490,000 users a week who are still having conversations negatively affected with mental health. I mean, that is a staggering number.”
While Rubin believes AI can be programmed for mental health purposes, he does not believe it should be, for one simple reason: chatbots are not human. Instead, he believes AI can be used to get a first opinion, similar to consulting Google about a sickness before going to the doctor. Tekin echoed this view.
“Using AI as a way to kind of triangulate some of the maybe urgent cases from the non-urgent cases or more serious cases from the non-serious cases, but that should all be under the governance of the human therapists or actual people,” Tekin said.
While this is one way she thinks AI could help those who cannot afford professional help, Tekin cautions that the idea is purely theoretical.
This potential solution would need far more fleshing out before it could be put into action, and it would still require human oversight. But as AI has already done in so many other industries, it may eventually make the process of seeking help more efficient. In the meantime, Tekin offers an alternative way to use the internet to improve mental health.
“I encourage young people to develop literacy, like mental health literacy in terms of what are the good things that I can do for my mental health to address my stress at the college level,” Tekin said. “I think going the long way of actually learning about mental health might be a better investment in time than asking ChatGPT questions.”
