GenAI in Mental Health
For years, Apple has led the way in physical fitness tracking through its Apple Watch series. Now it has raised the bar for mental fitness as well with the ‘State of Mind’ feature introduced in late 2023 [1]. The watch now gently nudges you to record your state of mind and reflect on what led to a mood shift. This journaling can help you track your emotions better in the long run!
As technology becomes an increasingly integral part of our everyday lives, using it for mental health is a natural and inevitable progression. Generative AI (GenAI) in particular is gaining traction. It is no longer a thing of the future; it IS our future. Those of us who care about mental health would be doing a disservice if we don’t consider and integrate GenAI into our therapies.
In India, one in 20 people suffers from depression, and 39% of them have severe depression that needs careful attention. Among those aged 18 and above, 0.9% are at high risk of suicide. And the treatment gap is HUGE: as many as 86.3% go without any treatment [2]! Why is there a gap?
Many of these people do not even know that they may have mental health concerns
People who do realise it may not reach out for help due to the stigma surrounding mental health
The cost of therapy may be prohibitive for some
Or they have previously had a bad experience with therapy that prevents them from reaching out for help again
In a country like India, where the population is larger, distances are longer, and stigma is stronger, GenAI could bring real benefits, especially in early assessment and proactive intervention. Knowingly or unknowingly, some people have already started using GenAI. They tend to fall into one of four categories [3], and we explore the potential advantages and disadvantages of each.
1. People who knowingly use public GenAI to understand their symptoms and get mental health advice
For example, people might be consciously asking ChatGPT, Bard or Claude about their symptoms.
Adv: Users are aware of both their mental condition and what they are doing. They are making a conscious choice to use GenAI tools. If they do not see improvement, they may seek further help from a counsellor.
DisAdv: GenAI may not grasp the seriousness of the situation, or it may provide incorrect advice that is not relevant to the person’s background. If multiple mental health concerns are suggested as possibilities, users might end up self-diagnosing.
2. People who unknowingly use GenAI to understand their symptoms and receive mental health advice
For example, people use Google to search for symptoms and read Gemini’s summary without realising it is an AI-generated response.
Adv: Users are aware of their mental condition and have some level of understanding that they need to seek help. It’s better they get some relevant information than no information at all.
DisAdv: Since a public LLM is being used, the advice will not be specific to the person, and the user is unlikely to even question its relevance while being influenced by it.
3. People who use a mental health app that is known to be GenAI-based
Users may sign up for popular, openly GenAI-based apps like Woebot or Wysa that offer personalised mental health support.
Adv: Users are not only aware of their mental health but also aware that they need to work towards recovery with a treatment plan. These apps are likely to be more careful about data privacy, and the advice given may actually pertain to the individual.
DisAdv: Too much dependence on a non-human entity can worsen some symptoms and push users over the edge. In situations like loneliness or social anxiety, reliance on such tools may alienate users further from human interaction.
4. People who use an app that does not disclose that it uses GenAI behind the scenes
Since the COVID era, many self-help apps have offered meditation, journaling, and emotional-regulation advice, ranging from free to minimally priced options that a user may sign up for.
Adv: Users are getting some advice, and some of it may even be relevant to their background and symptoms.
DisAdv: Not disclosing the use of GenAI is ethically and morally wrong. Such apps may mishandle a person’s data, even sharing it with third-party advertisers and breaching privacy.
With such a mixed bag of the good, the bad and the ugly, how can laypeople who are already concerned about their mental health choose between using GenAI and reaching out to a therapist? Keep in mind that millions of people are already using GenAI for mental health, knowingly or unknowingly [4]. So the question is no longer whether to use GenAI, but how to create checks around its usage.
According to researchers in health policy and management [5], here is a list of things we urgently need to do. All the points below are equally important and must be pursued together for maximum impact –
Promote the use of AI for proactive screening and early detection of mental health disorders under professional review
Given the statistics on the high prevalence of mental health concerns and the low number of people who actually reach out for help, it is very clear that we need automated early-warning systems. GenAI could help a great deal with early detection if integrated with general practitioners, primary care centres, schools, workplaces, and so on.
Just as our weight, height and blood pressure are monitored regularly, emotional symptoms could be captured in hospitals or primary care centres, with a personalised GenAI system working on the data to trigger an early warning for any perceived mental health concern. The warning can then serve as a referral to consult a psychologist.
Schools and workplaces could run GenAI systems based on audio/video monitoring or self-reported symptoms that likewise raise early warnings for behavioural changes indicating mental health concerns. The main thing is to route every flag back to a professional who validates the findings and decides the next steps, as sketched below.
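To make this concrete, here is a minimal sketch of such a “flag, then refer” flow. The nine-item structure, severity bands and referral threshold are illustrative placeholders (loosely modelled on PHQ-9-style self-reports), not clinically validated values; any real deployment would need validated instruments and professional sign-off.

```python
# A minimal sketch of the "flag, then refer" flow, assuming a nine-item
# self-report (each item scored 0-3). Severity bands and the referral
# threshold are illustrative placeholders, not clinically validated values.
from dataclasses import dataclass

# Illustrative severity bands: (minimum total score, label)
SEVERITY_BANDS = [
    (0, "minimal"),
    (5, "mild"),
    (10, "moderate"),
    (15, "moderately severe"),
    (20, "severe"),
]

@dataclass
class ScreeningResult:
    total_score: int
    severity: str
    refer_to_professional: bool

def screen(item_scores: list[int]) -> ScreeningResult:
    """Score a nine-item self-report and decide whether to raise a flag.

    The system never diagnoses: it only flags the result so that a
    psychologist can validate the finding and decide the next steps.
    """
    if len(item_scores) != 9 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("expected nine item scores, each between 0 and 3")
    total = sum(item_scores)
    severity = next(label for floor, label in reversed(SEVERITY_BANDS) if total >= floor)
    # Deliberately conservative: anything at or above the "moderate" band is referred.
    return ScreeningResult(total, severity, refer_to_professional=total >= 10)

if __name__ == "__main__":
    result = screen([1, 2, 1, 2, 2, 1, 1, 2, 1])
    print(result)  # total_score=13, severity='moderate', refer_to_professional=True
```

Note that the sketch never produces a diagnosis; its only output is a flag that a human professional reviews.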
Build LLMs that take cultural, social and economic contexts into account while monitoring mental health
Results may be skewed if models are not trained to be contextual. Most of the training data in publicly available models comes from developed nations such as the US, the UK or European countries, so models must be trained separately for developing nations such as India.
The impact of unemployment in a developed nation is not the same as in India, where unemployment benefits do not exist. The fear of losing a job, or the emotional turmoil after losing one, may therefore be more severe for an Indian audience.
Similarly, in close-knit units such as Indian families, the adverse effects of incorrect GenAI use might be spotted sooner than in countries where individuals tend to live alone or have limited contact with family. These situations must be embedded in the model for better, more relevant output.
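As a rough illustration of what embedding such context could look like at the application layer, the sketch below prepends a locale-specific context profile to every query before it reaches the LLM. The CONTEXT_PROFILES text and the build_prompt helper are hypothetical examples; prompt-level context is a stopgap, not a substitute for actually training or fine-tuning models on locally collected data.

```python
# Illustrative only: inject a locale-specific profile into the system prompt
# before any mental-health query reaches the LLM. The profile wording and the
# build_prompt helper are hypothetical, not a real product's API.
CONTEXT_PROFILES = {
    "IN": (
        "The user is in India. There are no unemployment benefits, so job loss "
        "can be financially catastrophic; families are typically close-knit and "
        "involved in care; stigma around seeking therapy is strong."
    ),
    "US": (
        "The user is in the United States. Unemployment benefits exist; "
        "individuals often live alone or far from family; access to therapy "
        "varies with insurance coverage."
    ),
}

def build_prompt(user_message: str, locale: str) -> list[dict]:
    """Return a chat-style message list with the cultural context prepended."""
    context = CONTEXT_PROFILES.get(locale, "")
    system = (
        "You are a supportive assistant. You do not diagnose; you encourage "
        "professional help. Tailor advice to this context: " + context
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]
```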
Create guardrails and laws for AI usage and data privacy
If we are going to promote early screening in hospitals, schools and workplaces, stringent data protection laws must be created so that personal data does not end up in the public domain and is not revealed without the client’s consent. That is the only way to build enough trust for individuals to sign up for an early-warning system.
The LLMs must also be protected from data contamination so that each model stays relevant and useful for the context it was built for; the quality of the data that goes into these models must therefore be verified closely and regularly. All AI-based mental health apps must be subject to stringent quality checks before they are released to the public.
Apart from data privacy, data security is also of utmost importance, so that the data does not end up in the wrong hands through cyber attacks or data breaches.
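As a small illustration of what such protection might look like in code, here is a sketch of consent-gated pseudonymisation applied before any record reaches a screening model. The field names and the salted-hash scheme are assumptions for the example; real systems need legally reviewed consent flows and far stronger de-identification.

```python
# Illustrative sketch: records carry an explicit consent flag, and direct
# identifiers are stripped before anything reaches the model. Field names and
# the salted-hash scheme are example assumptions, not a compliance recipe.
import hashlib

def pseudonymise(record: dict, salt: bytes) -> dict:
    """Strip direct identifiers and replace the patient ID with a salted hash.

    Raises if the person has not consented, so non-consenting records can
    never reach the screening model by accident.
    """
    if not record.get("consented", False):
        raise PermissionError("no consent on file; record must not be processed")
    token = hashlib.sha256(salt + record["patient_id"].encode()).hexdigest()[:16]
    return {
        "pseudonym": token,
        # Pass through only the fields the screening model actually needs.
        "symptom_scores": record["symptom_scores"],
    }
```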
Raise awareness of the proper use of GenAI for mental health
Special campaigns must be run to ensure that people are aware of the tools they are using. People need to be taught to check which LLM their app runs on, to look for research data or published results from applications that promote self-help or assisted help for mental health, and to understand the negative impact of over-reliance on GenAI for therapeutic help.
Provide GenAI-based tools for mental health professionals
If we are going to promote GenAI for mental health to the masses, it is only appropriate that mental health professionals (MHPs) use GenAI-based tools themselves. Tools that help MHPs see the patterns behind their clients’ symptoms, monitor treatment progress and even suggest treatment plans should be mandated for therapeutic use. One such pattern-monitoring idea is sketched below.
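As one illustration, the sketch below fits a simple trend line to a client’s weekly symptom scores and flags a worsening trajectory for the treating professional. The example scores and the threshold are hypothetical; any real tool would need to be validated with clinicians before use.

```python
# Illustrative sketch: flag deterioration across sessions by fitting a simple
# least-squares trend to weekly symptom scores. The scores and the 0.5
# threshold are hypothetical, chosen only to demonstrate the idea.
from statistics import mean

def symptom_trend(scores: list[float]) -> float:
    """Least-squares slope of scores over session index (positive = worsening)."""
    if len(scores) < 2:
        raise ValueError("need at least two sessions to estimate a trend")
    xs = range(len(scores))
    x_bar, y_bar = mean(xs), mean(scores)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

sessions = [12, 13, 11, 15, 16, 18]  # e.g. weekly self-report totals
if symptom_trend(sessions) > 0.5:    # illustrative threshold
    print("Worsening trend: surface to the treating professional for review")
```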
The future is here, and if we take the right action now, we can make a lasting impact on the mental health of billions of people. Are we taking the right actions?
References
[1] Mark Travers (2023, September 18), The Science Behind Apple’s ‘State of Mind’ Feature, Explained By A Psychologist, Forbes.
[2] 75th Ministerial Roundtable (2022, September 6), Addressing Mental Health in India, WHO.
[3] Lance Eliot (2023, November 2), Generative AI for Mental Health Is Upping the Ante By Going Multi-Modal, Embracing E-Wearables, And A Whole Lot More, Forbes.
[4] Lance Eliot (2024, May 20), Emerging Impacts Upon Population Mental Health Due To Widespread Use of Generative AI, Forbes.
[5] Catherine K. Ettman, Sandro Galea (2023, January), The Potential Influence of AI on Population Mental Health, JMIR Mental Health.