The launch of ChatGPT late last year – and its more recent iteration GPT-4 – has created immense buzz around artificial intelligence (AI) and its potential to transform multiple industries.
Mental health is one area where AI could bring about a true revolution. One in six people is struggling with their mental health at any given time, and there is a huge shortage of mental health professionals: more than 111 million people in the US live in regions where there aren’t enough therapists.
In addition to the immeasurable social ramifications of this crisis, it could cost the global economy up to $16 trillion between 2010 and 2030.
With existing systems struggling to address the enormous and unprecedented challenges facing the mental health field, experts have proposed that AI could help fill gaps and fix the systemic cracks preventing people from receiving adequate care. However, many have also raised concerns about the potential misuses and harms of AI in mental health care.
We’d like to outline the current AI trends in mental health, discuss some of the biggest challenges and concerns around the use of this technology, and share our latest advances and contributions in this space.
Current trends shaping the space
The use of automation in the mental health field is not new. Companies and research groups have been exploring how to improve mental health care with the help of AI for decades. In fact, the first computerized “therapist” – the ELIZA chatbot – dates all the way back to the 1960s.
In recent years, several AI-backed startups have emerged that are addressing the mental health crisis from multiple angles – from chatbots that mimic the role of a psychotherapist, to diagnostic tools, to models that streamline administrative tasks, and more.
Some of the most promising applications focus on lessening the burden on overworked mental health professionals – so that they’re able to see a greater number of patients – and improving mental health literacy among the population – so that people are better equipped to address their own mental health challenges before they reach crisis point.
At Kintsugi, we’re developing a model that ticks both of these boxes: Kintsugi Voice, an algorithm that detects signs of depression and anxiety just by listening to short clips of someone’s speech.
Why does our tech focus on voice? Our voice is the result of an extremely complex coordination of 100 muscles in the chest, neck, and throat. Subtle vocal changes can reveal conditions affecting a number of body systems, including Parkinson’s disease, Alzheimer’s disease, and even heart failure.
Likewise, a person’s voice changes when they have depression or anxiety – to a greater degree when their condition is more severe. In the last decade, the acceleration of machine learning has allowed us to mathematically model these patterns in large data sets.
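To make the general idea concrete, here is a minimal sketch – not our actual model – using the open-source librosa and scikit-learn libraries: each labeled clip is summarized as a fixed-length acoustic feature vector, and a simple classifier is fit over many such examples. The feature choices, labels, and the `clips` list are purely illustrative assumptions.

```python
# Illustrative sketch only (not Kintsugi's production pipeline):
# summarize labeled speech clips as acoustic feature vectors, then
# fit a simple classifier over many examples.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def clip_features(path: str) -> np.ndarray:
    """Mean and spread of MFCCs as a crude acoustic summary of one clip."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# clips: assumed list of (wav_path, label) pairs, label 1 = clinically assessed depression
X = np.stack([clip_features(path) for path, _ in clips])
y = np.array([label for _, label in clips])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

In practice a production model would use far richer features and architectures, but the shape of the problem is the same: many labeled voice samples in, a probability of depression or anxiety out.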
While psychiatrists are trained to pick up on subtle cues in voice to make a mental health diagnosis and determine its severity, their assessments remain highly subjective. An individual psychiatrist may only see a few hundred patients over their career, whereas our models learn from tens of thousands of examples. Furthermore, many people don’t even have access to such highly trained professionals. In fact, 70% of antidepressant prescriptions are written by primary care doctors rather than mental health specialists.
Our model detects depression at double the rate of primary care doctors. We envision Kintsugi Voice becoming as widely used in clinical care as blood pressure monitors and other diagnostic tools, helping practitioners ensure their patients receive the right mental health support in their moment of need.
Concerns around AI – and how we’re addressing them
Experts from Google and Microsoft recently said there are several issues that need to be ironed out before we can safely use AI across healthcare. Above all, we need to ensure that this technology is able to stay up to date with ever-evolving clinical evidence, and that it mitigates (and doesn’t exacerbate) existing racial biases.
The latter represents a huge ongoing concern in AI, in healthcare and other applications alike. Algorithms are not immune to the biases of the people who train them. Far from being neutral, these models can amplify discrimination against some of the most underserved populations.
At Kintsugi, working responsibly and transparently with AI has been a top priority from day one. We built Kintsugi Voice using a data set collected from several different sources. Our primary data collection began with our consumer voice-journaling application – which has over 200K downloads and users in more than 250 cities worldwide – along with in-house surveys. This has allowed us to represent people who may not have access to mental health care and to build a data set that reflects a patient population representative of both the US and the rest of the world.
From there, we’ve diversified our data set even further, partnering with dozens of prestigious academic institutions to include samples from scientific studies that focus on different patient populations in clinical settings.
Certain demographic groups have less frequent mental health check-ins, due to economic constraints, cultural stigmas, and other factors. These individuals therefore tend to be underrepresented in AI models based on clinical studies and health care data, yet they are often the ones who could benefit the most from this technology.
While developing Kintsugi, we noticed that the vocal indicators of mental health conditions (changes in pitch, more pauses, etc.) are present no matter where a person is from or what language they speak. Unlike language models that try to pick up on certain patterns of words (which can vary greatly depending on cultural nuances), voice biomarkers are based on physiology – and so they can detect signs of depression and anxiety in anyone, regardless of what language they speak.
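To illustrate why these biomarkers are language-agnostic, the sketch below computes two such measures – pitch variability and pause ratio – directly from the waveform with librosa. No transcript or language model is involved; the parameters and thresholds are assumptions for illustration, not our production values.

```python
# Illustrative only: two language-agnostic vocal measures computed straight
# from the audio signal, with no speech-to-text step.
import numpy as np
import librosa

def voice_biomarkers(path: str) -> dict:
    audio, sr = librosa.load(path, sr=16000)
    # Pitch contour from the probabilistic YIN tracker; unvoiced frames come back as NaN
    f0, _, _ = librosa.pyin(audio, fmin=65, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    # Frames whose energy falls well below the clip's peak are treated as pauses
    rms = librosa.feature.rms(y=audio)[0]
    pause_ratio = float(np.mean(rms < 0.1 * rms.max()))
    return {
        "pitch_variability_hz": float(f0.std()) if f0.size else 0.0,
        "pause_ratio": pause_ratio,
    }
```

Because both measures describe how the voice is produced rather than which words are spoken, the same computation applies whether the speaker is using English, Spanish, or any other language.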
Finally, in an area as sensitive as mental health, it’s essential to protect patients’ identities. Our tech does so by focusing not on what people say, but on how they say it, which has the additional advantage of mitigating socioeconomic bias tied to word choice. We also safeguard our data with full HIPAA compliance and SOC 2 Type 2 certification. As a result of all of these efforts, Kintsugi was recognized as a Gartner Cool Vendor in AI Governance and Responsible AI in 2022.
Using AI to address mental health challenges holistically
Insights gained from AI diagnostic tools can help open up a range of possibilities in health care. We’re currently exploring how collaborations with different players can help us move towards a more holistic model in treating mental health conditions.
We’re teaming up with OpenLoop – a company that provides clinicians and technology to run telehealth operations. OpenLoop offers a white-label telehealth platform that providers can customize to their patients’ needs. Kintsugi will now be integrated into this platform, so that healthcare providers using it can deliver more holistic care to their patients, no matter which discipline they focus on.
When signs of depression are detected, Kintsugi will alert staff in real time and provide scheduling options for behavioral health resources. Our tool can also help clinicians determine whether, and to what extent, therapeutic interventions are working. This will enable a more preventive approach to mental health care. And since Kintsugi assesses not just the presence of mental health conditions but also their severity, treatment plans can be tailored to each individual case.
This can result in huge health care savings, since patients exhibiting less severe depression can be directed to interventions that don’t require a trained therapist, such as a meditation app, journaling, or exercise.
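As a simplified illustration of this kind of severity-based triage – hypothetical, and not the actual Kintsugi or OpenLoop integration – a score from a voice screening could be mapped to a care pathway, with real-time alerts reserved for higher-acuity cases. The thresholds, field names, and notify() hook below are all assumptions.

```python
# Hypothetical triage sketch: map a voice-derived severity score to a care
# pathway and raise a real-time alert for higher-acuity cases.
from dataclasses import dataclass

@dataclass
class Screening:
    patient_id: str
    depression_severity: float  # assumed 0-1 score from the voice model

def route(s: Screening, notify) -> str:
    """Return a suggested pathway and alert staff when risk is elevated."""
    if s.depression_severity >= 0.7:
        notify(f"High-severity screen for {s.patient_id}: offer behavioral health scheduling")
        return "behavioral-health referral"
    if s.depression_severity >= 0.4:
        notify(f"Moderate screen for {s.patient_id}: flag for clinician follow-up")
        return "clinician follow-up"
    # Lower-severity cases can be directed to self-guided supports
    return "self-guided resources (meditation app, journaling, exercise)"

# Usage example:
# route(Screening("patient-123", 0.82), notify=print)
```

The exact thresholds and pathways would, of course, be set by the clinical teams using the platform rather than hard-coded as they are in this sketch.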
AI might just be the answer to several of the greatest challenges facing mental health care. But as this technology becomes more integrated into our lives, we must collectively work to ensure that it evolves in a way that increases access to care for everyone who needs it.