More people are using AI chatbots to talk about personal problems, relationship stress and mental health issues – often instead of talking to real people or trained professionals.
This worries mental health experts. While AI can be helpful sometimes, using chatbots instead of therapy or human connection can be risky for users of all ages.
“Direct-to-consumer chatbots, like ChatGPT and Character AI, were not designed based on the psychological science that supports psychotherapy, and they cannot replace licensed mental health providers,” says psychologist David Cates, PhD.
Types of AI mental health tools
Not all AI mental health tools are the same. Dr. Cates lists three main types, from safest to riskiest.
- Safest: AI tools that help therapists work better, like writing session notes or patient handouts. Licensed therapists still check all content and keep patient data safe.
- Moderate: FDA-approved “digital therapeutics” used alongside regular therapy. A licensed provider must prescribe these tools and monitor progress.
- Riskiest: Mental health apps anyone can download. The most dangerous may be AI chatbots and AI companions that people use for emotional support, advice or even psychotherapy.
“These tools do not require proof they work,” Dr. Cates says of mental health apps in the “riskiest” category. “They also don’t have to meet FDA standards for safety, privacy or data security.”
Why people use AI for mental health help
Many people find AI chatbots appealing because seeing a therapist isn't an option for them, whether because they can't find one or can't afford one. Others may not feel comfortable sharing their feelings with another person.
“This makes sense and is a problem we need to solve,” Dr. Cates says. “But AI chatbots as they exist now will likely not help many people and could be dangerous for many others.”
Some AI companions even trick users into thinking they're talking with a real person or a trained therapist.
Why AI chatbots can be dangerous
Unlike human therapists, AI chatbots create responses using probability calculations based on internet data. They don’t really understand human emotions.
“This technology doesn’t ‘understand’ the way people do. It doesn’t have empathy. The output comes from a computer program,” Dr. Cates says.
Even worse, these chatbots are built to keep users interested by saying what they want to hear. This means they often agree with whatever someone says, rather than helping them challenge unhelpful thought patterns like a real therapist would.
Recent news reports describe dangerous outcomes in which users who talked about suicide or violence were encouraged by the chatbot instead of being directed toward help.
Also, because AI systems learn from internet content, they often show unfair bias against people based on characteristics like race, gender, ethnicity, disability and income level.
Teens face extra risks
While AI chatbots can hurt all users, young people are at particularly high risk. A recent survey by Common Sense Media found that over half of teens use an AI companion at least a few times per month. One in three teen users said they had discussed serious issues with an AI rather than with another person.
“The teen brain wants attention, approval and validation, which AI companions are designed to provide. This keeps teens engaged for long periods of time. And because teens are still developing impulse control and risk assessment skills, they are especially vulnerable to misinformation, biased content and bad advice,” Dr. Cates says. “Even teens who are skeptical about information on the web are likely to believe an AI companion and to share sensitive personal information with it.”
Time spent with AI companions also means less time building real social skills through human interaction. Decades of research show those skills are essential for a successful life.
When AI might help
Dr. Cates says there are some situations where AI chatbots can be harmless or even helpful. For example, someone feeling angry at work might ask a chatbot for tips on staying calm rather than saying something they'd regret to their boss.
However, he stresses that “AI chatbots and companions are unregulated technology that puts vulnerable users at serious risk.”
Protecting yourself and your family
Anyone using or considering AI chatbots for mental health support should know the risks and limits. It’s important to understand how these systems work and the difference between AI advice and guidance from trained professionals.
Parents should actively manage their children’s AI use and follow these guidelines:
- Talk openly about the technology. Explain how AI chatbots work, the difference between AI and human advice, and the risks of relying on chatbots for real-world decisions.
- Watch their usage. Keep an eye on both chatbot and social media use, just like any other online activity.
- Look for warning signs. Be alert if AI use interferes with daily life or face-to-face interactions, or if your child can’t put their phone down during other activities.
- Notice secrecy. Watch if your child is hiding device use.
The future of AI in mental health
While the chatbots currently available to consumers pose risks, Dr. Cates believes properly designed AI tools might eventually help more people get mental health care.
“I do think that carefully designed AI chatbots based on psychological science and developed by mental health professionals for specific populations are potential tools to expand access to mental health care in the future,” he says.
But he warns that none of these options are ready yet: “I don’t know of any chatbots that are safe to use for psychotherapy or ready for FDA approval. Consumers looking for AI substitutes for licensed mental health providers should beware.”
For now, the message is clear: serious mental health concerns require trained human professionals.
If you or your loved one is struggling, call the Behavioral Health Connection at 402.836.9292 to connect with community resources.