Artificial Intelligence (AI) is by no means a new concept, but the topic of its use in many fields seems to be on everyone's mind. Within the last year, advancements in AI such as ChatGPT have prompted claims that these tools can accurately track, analyze, and respond to mental health concerns through machine learning. The basic concept is that an individual inputs their symptoms or emotional state and the AI mimics a therapist's response with supportive phrases and tips for improvement. As you can imagine, this is highly controversial, with fierce advocates both for and against the use of the technology. Some articles have declared integrating AI into the mental health field an absolute necessity, while others fiercely warn against its potential harms.
The Argument for AI Use
The pandemic sharply increased the number of individuals seeking therapy. An already burdened and limited mental health care system within the United States had difficulty keeping up with demand. We still hear stories of individuals who have waited months on waitlists or cannot seem to find any therapists accepting new clients. With the use of AI, individuals experiencing mental health distress could get some form of care immediately. This care would also presumably be low cost, which is particularly relevant for those with limited or no insurance coverage.
Another argument in favor of artificial intelligence-based care is that it is available 24/7. Individuals do not need to wait for a scheduled session to receive support. Because symptoms or the need for support can arise at any hour of the day or night, the immediacy of care is a draw.
The Argument Against AI Use
Many mental health providers and researchers have stated that the efficacy of treatment is greatly shaped by the therapeutic relationship developed between client and therapist. This healing relationship cannot be replicated by a computer. Through this deeply human connection, mental health counselors learn how to provide nuanced care to their clients. What works for one client may not work for another, and generalized mental health help may not be effective.
AI works solely from an individual's written or verbal communication. This means nonverbal cues are not taken into account. Most seasoned clinicians understand that nonverbal cues are an essential part of an individual's clinical picture. This limitation on the information AI can gather raises concerns about its ability to accurately assess an individual. Overall, there is limited research on the efficacy and risks associated with AI-driven care. Serife Tekin, a researcher in mental health ethics, stated, "The hype and promise is way ahead of the research that shows its effectiveness."
It is fair to say that as AI advances, questions about how to apply it to the mental health field will continue to surface. Its ability to provide immediate, low-cost, 24/7 support may fill some of the gaps in current care. However, in a field dealing with complex emotions that are often worked through within the context of a therapeutic relationship, AI has significant limitations. More research is needed on the benefits and risks of such technology in this healthcare setting.