AI and mental health – am I about to be replaced by a chatbot?

Posted: 02/05/2023

It’s a question many of us are asking – how are the new forms of AI going to affect the work we do? While we can’t adequately cover such a large question in such a short blog, I thought we’d start by exploring how generative AI chatbots could shape future mental health services.

Mental health chatbots have been around for a while (Wysa, Heyy, Woebot etc.), so why are we so worried about this now? The truth is, AIs like OpenAI’s ChatGPT and Google’s newly released Bard are different to the older AIs because they are generative. The mental health chatbots we are familiar with are generally pre-programmed with specific questions and responses, while chatbots like ChatGPT generate original, unique text on the fly, drawing on the data they were trained on – text from across the web, books, scientific papers and other sources.

If you’ve used ChatGPT for any length of time, you will have realised that you can have quite in-depth, complex and nuanced conversations with it. It also tailors its responses to what you say. Strangely, this can feel like it is building rapport with you, and the conversation can feel quite genuine.

Additionally, it can use specific therapeutic approaches, respond in many different languages and draw on huge numbers of sources to articulate the most effective advice and treatment options available for a particular issue or challenge.

It generates responses in less than 30 seconds and can talk with millions of people at once at any time of the day or night without breaking a sweat.

I recently asked ChatGPT4 how it would structure a therapeutic conversation with a human, and this was its response:

‘As an AI language model, I am not a qualified counselor and cannot replace a human counselor. However, if we assume that AI counseling services have been developed to a sufficient degree and that a person has voluntarily opted for a counseling session with an AI chatbot, here are some ways a session involving ChatGPT4 could unfold:

  1. Introduction: The chatbot would introduce itself and establish rapport with the human by asking their name and some general questions about their current state of mind.
  2. Assessment: The chatbot would then ask some targeted questions to gain a better understanding of the person’s mental health concerns, such as their symptoms, their triggers, their support system, etc.
  3. Active Listening: The chatbot would demonstrate active listening skills by reflecting back on the person’s feelings and thoughts, acknowledging their concerns, and expressing empathy.
  4. Goal-setting: Based on the information gathered during the assessment, the chatbot would help the person set goals for the session and determine what they would like to achieve from the counseling session.
  5. Guided conversation: The chatbot would engage the person in a guided conversation to explore their feelings and help them gain insights into their challenges. This could involve using open-ended questions, reflecting on their emotions, and offering insights or perspectives.
  6. Skill-building: The chatbot could also offer skills and techniques for managing stress, anxiety, or other issues. This could include cognitive-behavioral techniques, mindfulness exercises, or breathing exercises.
  7. Conclusion: The chatbot would conclude the session by summarizing what was discussed, asking if the person has any further questions or concerns, and offering resources or referrals for additional support if needed.

It is important to note that while AI counseling services may be a useful tool for some, they cannot replace the benefits of human connection and support from a qualified therapist or counselor.’

Sounds like it has a reasonable grip on the whole counselling process, in theory at least.
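For readers curious about the mechanics, a structure like the one above is essentially what developers encode in a ‘system prompt’ – the hidden instructions a chatbot is given before a conversation starts. Below is a minimal, purely illustrative sketch of how that might look using the OpenAI Python library; the model name, wording and safety caveats are my own assumptions, not a production design (and certainly not a clinically validated tool).

```python
# A minimal, illustrative sketch only – not a clinically validated tool.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The seven-step structure described above, expressed as a system prompt.
SESSION_PROMPT = """You are a supportive wellbeing chatbot, not a qualified counsellor.
Follow this structure: 1) introduce yourself and ask the person's name and how they are;
2) gently assess their concerns; 3) reflect back what you hear; 4) help them set a goal
for the conversation; 5) explore the issue with open-ended questions; 6) offer simple
coping skills; 7) summarise and point to professional and crisis resources.
If the person appears to be at risk, stop and direct them to emergency services."""

def reply(history):
    """Send the conversation so far and return the chatbot's next message."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat-capable model could be substituted
        messages=[{"role": "system", "content": SESSION_PROMPT}] + history,
    )
    return response.choices[0].message.content

# Example first turn.
history = [{"role": "user", "content": "Hi, I've been feeling really stressed lately."}]
print(reply(history))
```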

In fact, people are already using generative chatbots for counselling, with mixed results. Some have found them genuinely helpful for things like relationship advice, while others have suggested they can be harmful (a man in Belgium reportedly died by suicide after several weeks of conversations with a chatbot). Most experts seem to agree that for moderate to severe mental health issues, these tools could be harmful if not used under the supervision of a mental health professional.

As we’ve already seen, there are many potential risks and challenges to think through before we start to rely on these new AIs, as they are still in their infancy when it comes to mental health support.

The pros and cons

Let’s start with the cons.

Instead of asking human experts what they believe the risks and limitations of chatbot mental health tools are (there are a number of great articles on this topic), I thought I’d ask ChatGPT4. Here’s its list of the top 10 issues and limitations it believes exist with current chatbot mental health tools:

  1. Limited empathy: Chatbots lack the emotional intelligence and empathy of human therapists, which can be important in supporting people with mental health issues.
  2. Limited scope: Chatbots may not be able to provide the same level of support as human therapists, particularly for complex mental health issues.
  3. Misdiagnosis: Chatbots may not be able to accurately diagnose mental health conditions, potentially leading to misdiagnosis and inappropriate treatment.
  4. Overreliance: Users may become overly reliant on chatbots for mental health support, potentially neglecting other support options.
  5. Privacy concerns: Chatbots collect user data, raising concerns about privacy and data security.
  6. Limited feedback: Chatbots may not be able to provide the same level of feedback and guidance as human therapists, potentially limiting their effectiveness.
  7. Technical issues: Chatbots can be prone to technical issues, which can interrupt or interfere with mental health support.
  8. Lack of regulation: Chatbots are not currently regulated in the same way as human therapists, raising concerns about quality and safety.
  9. Lack of personal connection: Chatbots lack the personal connection and rapport that can be important in supporting people with mental health issues.
  10. Ethical concerns: The use of chatbots in mental health support raises ethical concerns around the use of technology in healthcare and the potential for harm to users.

While we can’t say that AIs like ChatGPT4 are in any way self-aware, this list is eerily self-reflective and provides a useful summary of some of the concerns we have regarding chatbot mental health tools.

I would add a few more potential risks to this list. Because these AIs are built by humans and trained on human-generated data, they will inevitably inherit unconscious biases (but then, so do humans).

We are social animals, and the evidence consistently suggests that human connection is a protective factor for mental health, so over-reliance on technology for emotional support is a risk. In a world focussed on cost-cutting and the bottom line, the availability of mental health chatbots could also undercut the perceived value of human forms of support.

Also, chatbots can sometimes be a bit glitchy – there are reports of them inventing references that don’t exist and giving false or outdated information at times – which is a problem. OpenAI itself acknowledges the model’s limitations, stating on its website:

‘While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.’
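For context, the Moderation API mentioned in that quote is a separate OpenAI endpoint that screens text for unsafe content. Here is a rough sketch of how a service might use it as one layer of safety checking – the handling logic is my own assumption, not how OpenAI or any particular mental health app actually implements it:

```python
# Rough illustration of screening a message with OpenAI's Moderation endpoint.
# Assumes the openai Python package (v1.x); the handling logic is an assumption.
from openai import OpenAI

client = OpenAI()

def is_flagged(text: str) -> bool:
    """Return True if the Moderation endpoint flags the text as unsafe."""
    result = client.moderations.create(input=text).results[0]
    return result.flagged

# A real service would not simply block flagged messages – content suggesting
# someone is at risk should be escalated to a human or to crisis resources.
if is_flagged("example user message"):
    print("Message flagged – escalate to a human moderator or crisis support.")
```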

But there are also many potential benefits.

Many mental health services and supports are currently over-subscribed, and often expensive and difficult to access – these tools could offer people interim support. Sessions can be held in many different languages, in any time zone, at any time of day and at low cost, increasing accessibility; and there is potentially no limit to the number or length of sessions – people could have multiple sessions a day.

These chatbot apps could also be used to collect and analyse an enormous amount of de-identified data to help us improve services.

Additionally, you can ask ChatGPT to take on a ‘personality’, so people could request a counselling ‘avatar’ with traits they feel comfortable with, which could assist with rapport building – for example, their counselling avatar could have a specific gender, age, cultural background, accent or set of religious values. (Wysa already uses a simulated human voice that is almost indistinguishable from a real one, helping people to feel connected.)

ChatGPT can also adopt any therapeutic modality, so people could request a particular approach, or a human expert could pre-define the counselling conversation around a particular therapeutic model – in practice, by setting the instructions the chatbot must follow before the conversation begins.
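Technically, both of these ideas come down to the same mechanism: composing the requested traits and therapeutic approach into the system prompt used in the earlier sketch. A small, purely illustrative example follows – the traits and modality below are invented, not recommendations:

```python
# Illustrative only: composing an 'avatar' and a therapeutic modality into the
# kind of system prompt used in the earlier sketch. Traits are invented examples.
def build_system_prompt(traits: dict, modality: str) -> str:
    persona = ", ".join(f"{name}: {value}" for name, value in traits.items())
    return (
        f"You are a supportive wellbeing chatbot with these persona traits: {persona}. "
        f"Use a {modality} approach. You are not a qualified therapist; encourage the "
        "person to seek professional help for anything beyond everyday stress."
    )

print(build_system_prompt(
    traits={"tone": "warm", "age": "middle-aged", "cultural background": "Australian"},
    modality="strengths-based, solution-focused",
))
```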

Currently, counselling services are incredibly stretched and burnout levels are high. If these services and supports are developed with appropriate ethical and safety parameters in place, they could reduce the pressure on the system and alleviate high levels of stress on human service workers.

So, what does this mean for social workers, counsellors and psychologists?

In some ways, things are changing so quickly in this space that it is hard to know the extent and reach of the changes (which is why Elon Musk, Steve Wozniak and thousands of others have requested a six-month pause on the development of generative AIs so that the risks and impacts can be better assessed). However, we can at least start to explore some of the questions being raised by the development of these new AIs.

Experts believe that the vocations most affected by these new AIs will be knowledge or information workers. While social workers, counsellors and psychologists are often very ‘hands on’ in their work, and may therefore be less affected than other information-based industries, it is clear that AIs will be used to supplement or, in some cases, replace some therapeutic services.

Where to from here?

The reality is that AIs seem to be here to stay.

As professionals in human services, we need to make sure we are using safe and ethical AI-powered tools. If we can find ways to ensure these tools are effective and safe, there are many potential benefits for us and the people we work with. To summarise:

  • We may be able to offer more timely, effective and affordable support to people while they are waiting to access our services, or use these tools to enhance existing services.
  • These chatbots and apps may be able to educate people about their mental health and help them develop tools and skills for monitoring and managing their mental health long term.
  • They may also provide us with vital data that we can incorporate into our practice and inform best practice more broadly.
  • And crucially, they are scalable, so they could help take pressure off counselling and mental health systems that are collapsing under unprecedented levels of demand.

Chatbots are a tool—if used properly, and appropriately regulated, they could be an incredibly useful addition to our toolkits. If you’re thinking about how to incorporate AI-based mental health tools into your work, here are a few questions you might ask yourself:

  • How could these tools enhance, or undermine, the services we currently offer people?
  • How could we use these tools to take pressure off our service and teams?
  • What checks and parameters do we need to have in place to ensure these tools are safe and ethical?
  • What do we need to know to feel confident we can advise people on how to use therapeutic AIs in ways that are beneficial and safe?

What are your thoughts about the use of AIs in supporting people to manage their mental health? Whether you’ve had positive or negative experiences, we’d love to hear from you in the comments below.

4 responses to “AI and mental health – am I about to be replaced by a chatbot?”

  1. Geoff Barker says:

    Do you think the decreasing level of human to human interaction is a significant cause of the increasing demand on counselling/psychology/mental health services, and AI mental health services will create a spiral facet? And who will be making money out of this!

    • Chris Cain says:

      Hi Geoff, you make really good points. I think all the new communication-based technology that has become available over the past few years has had a mix of positive and negative impacts in this space.

      On the up side, we have been able to connect with people from all over the world and many people who may have otherwise felt isolated have found their ‘tribe’. On the down side, we are feeling more lonely, anxious and disconnected than ever.

      I suspect these new AIs could also have a mix of positive and negative effects. As to whether they will cause a further spiral of mental health, I think we need to find better ways to exist in this strange new world we have created, ways that are more positive and life affirming – I’m not sure how we do that – maybe AI are part of that, maybe not. But these are great questions to be asking I think.

      Thanks for sharing your thoughts,
      Sue

  2. Geoff Barker says:

    Effect, not facet. That corrective text for you

  3. Jane M says:

    Thanks – this article was really helpful intro for me. I really appreciate you putting it together or did you get it written by an AIBot – it will be interesting in the future to see how this area evolves 🙂 Many thanks
