The real ethical issues with AI for clinical psychiatry
AI is already being integrated into mental healthcare. Even mundane applications raise significant ethical concerns
I have a new article out in the International Review of Psychiatry: “The real ethical issues with AI for clinical psychiatry” (full text). This was an invited commentary that led me in a direction I might not have otherwise gone, and I was glad—it gave me an opportunity to look critically at the hype cycle du jour, I learned a few things about the state of play today, and I arrived at some tangible suggestions. I found that there is a lot of discussion about speculative, futuristic concerns, like AI bots that someday might replace therapists and revolutionize personalized diagnosis, but not much discussion of what’s actually happening in current clinical practice. I argue that there is a more urgent set of concerns about the AI applications already in use. If you are interested in the practice of mental healthcare today, I think this is an important issue to track.
In the article, I selectively review some of the current AI tools to show the types of applications in use, suggest a set of ethical considerations, and consider related systemic and social issues—such as the system of mass surveillance as a business model and the way AI companies are arguably “fracking” the internet’s existing informational ecosystem without a coherent plan for what will replace it.
Below I’ve chosen some excerpts so you can get a high-level view:
AI Applications for Mental Healthcare
AI applications currently in use can be organized into several major categories:
Screening and intake/data collection - an AI tool interacts directly with the prospective patient to help with on-boarding, collecting demographic information, providing information, and screening for appropriateness for care.
Documentation/“scribes” - drafting progress notes and, in some cases, interacting directly with electronic medical records.
Asynchronous decision support/sidekick - insights and guidance during the session, or ahead of the next one.
“Non-clinical” support - chatbots that offer coping strategies, mood tracking, and psychoeducation
Adjunctive treatment - explicitly approved as medical devices and used as adjunctive treatments for patients with formal mental disorder diagnoses
Examples
Screening and intake/data collection
“One prominent example is Limbic Access, an AI-based tool [approved in Britain as the equivalent of a Class IIa medical device, because of its roles in triage and diagnostic prediction] that is currently used by certain parts of the UK National Health Service (NHS). Reportedly, the application has been used by more than 40% of NHS Talking Therapies to screen more than 200,000 patients (Heath, 2023; Limbic, n.d.; Limbic Access, n.d.). The personalized self-referral chatbot integrates into the service’s website, and users who visit the site are engaged by the chatbot, which interacts with them to collect referral information… Even basic processing of the data collected may constitute an assessment of prediction, safety and risk, and level of care.”
Documentation/’scribes’
"…generate notes from recordings of mental healthcare sessions, producing a draft note for review and editing by the clinician before being finalized in the electronic medical record.”
Asynchronous decision support/sidekick
"At least one startup advertises an AI system that uses recordings of therapy sessions not only to draft session reports but also to generate ‘sidekick’-like insights and guidance….In addition to drafting a progress note, the application ‘builds a dynamic Client Story’ and makes suggestions for future psychotherapy sessions”
Non-clinical patient support
“Support tools, such as chatbots, are intended to help people manage their general mental health, offering coping strategies, mood tracking, and psychoeducation. They may also provide acute support in moments of distress. However, they generally are not presented as clinical tools per se, nor are they presented as the practice of medicine, and they do not seek or possess formal approval from regulatory authorities as medical devices.”
Adjunctive treatment
"One AI-based conversational agent is FDA-approved to deliver CBT for depression and anxiety for adult patients with chronic musculoskeletal pain. Another is FDA-approved as a digital tool for the treatment of major depression.”
Ethical Considerations
One set of concerns involves the ways the boundaries between different types of applications get blurred. For example, the distinction between (non-clinical) “support” and (clinical) “care” in those last two examples is not always clear. “‘Support’ is not generic; it necessarily relies on a particular theory or model of change, such as that found in CBT, which is used to train the AI model.” This is also a slippery distinction that companies employ selectively. For example, there are chatbots that present themselves as “support” (i.e., rather than clinical care) but are also marketed extensively as relying on evidence-based approaches like CBT. This is a major issue that also relates to other startups in the mental health space.
A similar boundary issue: there are variations in what could be labeled “decision support,” with big differences between real-time, sidekick-like suggestions during a patient session and asynchronous suggestions outside of patient care. The blurred lines here have significant implications both for the actual practice of care and for regulatory considerations.
I also raised some other concerns:
The loss of writing - “I write entirely to find out what I’m thinking.” —Joan Didion
“AI tools that automate part or all of the documentation process—whether for data collection, scribing, or more far-reaching purposes—may remove an opportunity for psychiatrists to reflect on their work, potentially missing opportunities to recall crucial details, to evaluate their underlying care plan, or even to consider their own development as clinicians… Automated note writing might obscure deficiencies in a clinician’s performance and make it more difficult to identify opportunities for improvement.”
The seductive allure of AI
“One recent study found that people blindly trust AI, even when it’s wrong…. Indeed, there is ample psychological evidence that people’s reasoning can be swayed by a variety of irrational explanations and technology-based rhetoric, such as the ‘seductive allure of neuroscience explanations’... The seductive allure of AI is particularly relevant in a time of great AI hype, in which things are labeled as AI even when they are not. Parmar et al. reviewed 78 health-focused applications that used chatbots, and they found that 96% used fixed, finite-state designs, not actual AI or natural language processing”
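To make that distinction concrete, here is a minimal sketch (my own illustration, not drawn from the article or from Parmar et al.) of what a “fixed, finite-state” chatbot design amounts to: a hand-written state machine with canned prompts and keyword matching, with no language model or natural language processing anywhere in the loop.

```python
# Illustrative sketch only (not from the article): a toy "fixed, finite-state" chatbot.
# Every state, prompt, and transition is scripted in advance; the program does no
# natural language processing, it just matches the user's reply against keywords.

STATES = {
    "greet": {
        "prompt": "Hi, I'm a wellness check-in bot. How are you feeling today?",
        "transitions": [("stressed", "coping"), ("fine", "goodbye")],
        "fallback": "coping",
    },
    "coping": {
        "prompt": "Would you like a breathing exercise or a mood-tracking tip?",
        "transitions": [("breathing", "goodbye"), ("mood", "goodbye")],
        "fallback": "goodbye",
    },
    "goodbye": {
        "prompt": "Thanks for checking in. Take care!",
        "transitions": [],
        "fallback": None,
    },
}


def next_state(state, user_reply):
    """Pick the next state by simple keyword matching -- no model inference involved."""
    for keyword, target in STATES[state]["transitions"]:
        if keyword in user_reply.lower():
            return target
    return STATES[state]["fallback"]


def run():
    state = "greet"
    while state is not None:
        print(STATES[state]["prompt"])
        if not STATES[state]["transitions"]:
            break
        state = next_state(state, input("> "))


if __name__ == "__main__":
    run()
```

Everything such a system can say is enumerated in advance, which is very different from the open-ended generation people tend to imagine when they hear “AI.”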
Bias
Racism has appropriately received a lot of discussion regarding bias in clinical AI:
Prior authors have suggested that AI might perpetuate existing bias by furthering social inequities or demonstrating prejudices because of underlying systemic racism (Lee et al., 2021). This concern is particularly relevant in cases of screening and triage and must be approached cautiously.
Appropriately, AI companies are studying these effects, and some applications have been found to increase referrals among minority groups, suggesting that if AI tools are deployed thoughtfully, they might actually mitigate aspects of systemic racism and inequality.
It’s also important to keep in mind other forms of bias, in line with the different frameworks of care I’ve reviewed in previous posts:
“AI applications trained on large bodies of training data are likely to reflect the most prominent and common views of mental health care. Likewise, specific AI systems are designed using particular treatment models, which can limit their flexibility in choosing among different treatment approaches. Treatments vary by core philosophies and treatment goals.”
Data security
There are some basic questions about privacy protections, HIPAA compliance and the like. But there are also bigger questions about data stewardship regarding clinical AI:
“One major AI company is being investigated for potential harm to consumers, in part due to a large data leak that exposed users’ chat histories. Some LLM technology ‘regurgitates’ content; i.e. generating near-verbatim excerpts from training data, raising further security concerns.”
Liability
There are lots of ways introducing AI into care might open clinicians up to liability. Bottom line: concerns about liability are almost always reflective of concerns about good practice.
“…more intuitively, clinicians may face liability if they follow erroneous recommendations of an AI tool. Clinicians are ultimately responsible for relying on their own professional judgment and knowledge, so they are likely to be at fault if they fail to question and override bad advice given by AI tools… Conversely, clinicians may be liable for ignoring or failing to implement a recommendation from an AI tool, a risk that may increase if the use of AI tools becomes more common”
In summary…
While the media fixates on sci-fi AI futures, more mundane applications are already being rolled out in mental healthcare today.
Those mundane applications have real ethical impacts on the practice of mental healthcare and the understanding of mental health more generally.
The allure of AI can lead to overconfidence and blind trust in its capabilities, all the more problematic given how infrequently what is labeled “AI” is indeed AI.
As is often the case in clinical practice, the most important issue is our fiduciary responsibility to patients. The use of AI, whether to reduce administrative burden, improve access, or inform clinical care, should be primarily motivated by a desire to improve patients’ lives.
A reminder: I’m holding my first live workshop on Sunday, September 8th, at 12 PM Eastern (9 AM Pacific): From Recovering to Flourishing: A Foundational Workshop, a 90-minute interactive online event.
If anything, I think these AI examples show the need for continued human contact and expertise, so join us for a group exploration.
If you have any questions about the workshop, feel free to comment below or reach out to me directly by replying to this message.
Thank you for the article, Carl! Some great considerations.
Like any clinical tool, it must be used with a clear-eyed understanding of how it works and its limitations. At least as currently developed, it is useful for clerical tasks and, I find, most powerfully for idea generation. Naturally, this quickly becomes a slippery slope to outsourcing your thinking rather than thinking with it. And I think there are really valid concerns about losing thinking skills if you are "brought up with" LLMs and AI, so to speak - there is certainly value in the laborious task of having manually written thousands of notes and combed through charts. I don't know if this will be of increasingly marginal value as things progress, though, and there is always an opportunity cost for time.
I do think one thing that frequently gets missed in the skepticism is the comparison to things as they currently are. There is a presumption that clinicians practice in a well-thought-out, reflective way, and while some do, many do not. Tools like these can give lots of feedback, and while it is healthy to have some diversity in style, approach, etc., it is also true that there are better ways to practice that are more aligned with the evidence. Mental healthcare has one of the widest "standards of care" and is perhaps more divorced from evidence than any other field of medicine. I have tremendous optimism for how it can improve care, especially for providers who might be below average, even when used fairly unthinkingly.
To my thinking, the biggest ethical issue with AI is the most basic: it's not a human being. Even if AI could objectively do some tasks better, it still comes up short because it's not human, and human-to-human contact is at the essence of all therapeutic change. We should be wary of outsourcing the best of us to machines, no matter how "advanced."