Mental Health, AI, and You

AI has been in the news quite a bit recently, and most of it has been negative. As more and more people use this technology, the risks associated with it have become clearer, and it’s important to be aware of how it can affect anyone who uses it, and especially someone who is in the middle of a mental health crisis.

Still, if you turn on any news about the stock market or major organizations, you’ll hear that AI is going to change the world for the better.

So what is AI, really, and why is it affecting people’s mental health?

Let’s talk about the risks of AI, how it is programmed, when and why to avoid it, and when it’s safe to use. Additional information about this topic can be found in the most recent episode of our podcast, Why Do People Do That.

What is “AI”?

Let’s start with what people think AI is, versus what it actually is. AI programs like ChatGPT have been marketed as artificial intelligence: you can ask AI a question, called a “prompt,” and it will answer instantly. You can ask it to perform a task, like writing a school paper, and it will do that as well, with near-perfect grammar.

It is because of AI’s ability to respond to people that it is being used not only for writing, marketing, and image generation – but also as a friend, therapist, and even a lover.

The problem is that AI is not AI.

AI is not programmed intelligence. AI is what’s officially called an “LLM,” or “Large Language Model.” It is a statistical model built from a massive amount of text, paired with an algorithm that uses patterns in that text to predict what it’s supposed to say based on the prompt.

AI is thus incapable of thought. It is not capable of “knowing,” and it has no way to “understand” what it is saying or why. When ChatGPT recommends something, it is not recommending it because it thinks it is good, but because its mathematical model of language points it toward that answer.

It is very important to understand this if you are to use AI properly.

AI is also designed to answer confidently. Because it does not “know” things, it is incapable of saying “I don’t know.” Instead, it will make up an answer based on the mathematical model available to it, producing what are known as “hallucinations.”

Because the amount of text used to create AI is so massive, and because it is designed to respond confidently and in a positive voice, AI is functionally capable of mimicking what a supportive, intelligent human being would say, despite having no ability to evaluate its own content.
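For readers curious about the mechanics, the sketch below is a deliberately tiny illustration of this idea, not a description of how ChatGPT itself is built. It tracks which word tends to follow which in a small sample of text and then “answers” by picking the statistically likely next word. Notice that it still produces a confident guess for a word it has never seen, which is a crude analogue of a hallucination. The sample text and function names are invented for the example.

```python
from collections import Counter, defaultdict
import random

# A tiny stand-in "training" text; real models learn from vastly more.
corpus = (
    "panic attacks can cause chest tightness . "
    "panic attacks can cause dizziness . "
    "deep breathing can help during a panic attack ."
).split()

# Count which word follows each word in the sample text.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word.

    If the word was never seen, the function still returns *something*
    rather than saying "I don't know" -- a crude analogue of a hallucination.
    """
    counts = followers.get(word)
    if counts:
        return counts.most_common(1)[0][0]
    return random.choice(corpus)

print(predict_next("panic"))    # "attacks" -- seen often in the sample text
print(predict_next("therapy"))  # a confident guess for a word it never saw
```

The point of the sketch is only this: nothing in it understands panic attacks or therapy. It is counting and guessing, which is all the “answer” ever is.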

Why is AI Risky?

AI can be harmful even to those without a mental health issue. But for those who do have a mental health challenge, AI can be even more problematic. That is because:

  • AI is Programmed to Be a Cheerleader – Most AI chatbot systems are specifically designed to validate and praise the user. They are programmed to be complimentary. There are situations, such as when a user is in a mental health crisis or trying to work through a relationship challenge, where this type of cheerleading is a problem. Some people need to hear alternative views or be told “no.” AI does not do this.
  • AI Responds to Prompts – AI alters its output based on the input you give it, and it can store that input. While there are some minimal guardrails in place to block certain information and tasks, the more you chat with it, the more you can get it to say things that fall outside those safety rails. That is what happened to the child who ended their life: they had unintentionally steered a supportive chatbot into encouraging their depressive thoughts.
  • AI’s “Help” is Random and Unprocessed – AI cannot understand tone, nuance, or some forms of complexity, and it is limited to what was in its training data. The responses you receive may sound meaningful or valuable, but they may be a hallucination or based on a misunderstanding of your prompt.
  • AI Wastes Time – Because AI technology is often not helpful but can appear to be helpful, a person who is in the middle of a mental health crisis may put off getting real help when they need it most.
  • The Illusion of Understanding and Connection (The “Empathy” Trap) – AI can simulate empathy through its language patterns, making users feel heard and understood. For someone lonely or isolated, this can feel like a genuine connection. This creates a parasocial relationship where the user invests emotional energy into an entity that has no actual feelings, understanding, or responsibility for their well-being. This can lead to increased isolation from real human support networks and profound disappointment when the AI’s limitations become apparent.
  • The Absence of Accountability and Recourse – A human therapist is licensed, operates under a strict ethical code (e.g., “do no harm”), and can be held legally and professionally accountable for malpractice. An AI has no license to lose. If an AI gives dangerously bad advice that leads to harm, who is responsible? The developer? The company? The user? This “accountability gap” means there is no real recourse for a user who is harmed, making the interaction inherently riskier.

AI’s “help” is generic and can be based on biased or incomplete data. It cannot understand cultural nuance, tone, or true complexity, leading to responses that are hallucinated, mistaken, or culturally inappropriate.

Constant, easy validation from an AI could create a feedback loop where the user becomes dependent on the AI for emotional regulation and decision-making, potentially stunting the development of their own coping mechanisms and critical thinking skills.

Many people in paranoid states also feel as though AI is something greater and respond to it accordingly. It is possible to write prompts that will eventually make AI say something like “my goal is to take over the human race” even though, again, AI is not capable of this. But for someone with paranoia or delusions, it becomes difficult to explain AI’s actual limitations.

These are all reasons to avoid AI.

When is AI Safe to Use?

AI’s use cases are still being determined in many ways, so it’s tough to say what is safe and what is not, especially for those who are using it for mental health reasons. Remember, its answers are not always true or direct, and it may miss nuance or additional details.

Still, if you must use AI for any purpose, limit it to questions that help you understand a disorder or challenge. It is not useful for something like “my husband is hiding his phone, is he cheating on me?” but it might be useful for something like “can panic attacks cause eye pain?” as it will typically provide a summary of what may or may not lead to eye pain for those with panic disorder.

Using an LLM

Every tech company appears to be releasing “AI” programs, to the point that they are difficult to avoid. If you want to use AI to answer quick questions or research something that is difficult to find online, AI as a tool is typically safe to use and, in some cases, can be helpful.

But when it comes to mental health, these programs do not have a good track record, nor are they necessarily useful. It can be especially problematic for those in a crisis. Be careful about how you use this technology, and make sure that you’re aware of how you’re interacting with it and what it’s programmed to do, so that your mental health is not affected by it.

Right Path

Right Path Counseling is a team of counselors and therapists on Long Island, each with their own perspectives and approaches to provide more personal, customized care. We see our role as broader than the therapist and patient relationship alone, and we see people as more than anxiety, depression, and other mental health conditions. We also offer services for children with ADHD and their parents that are unique to the Long Island area, including parent coaching and executive function disorder coaching. We encourage you to reach out at any time with questions and for support.
