4 Comments
Quinn MacDougald

Thank you for the article, Carl! Some great considerations.

Like any clinical tool, it must be used with a clear-eyed understanding of how it works and of its limitations. At least as currently developed, it is useful for clerical tasks and, I find, most powerfully for idea generation. Naturally, this quickly becomes a slippery slope to outsourcing your thinking rather than thinking with it. And I think there are really valid concerns about losing thinking skills if you are "brought up with" LLMs and AI, so to speak - there is certainly value in the laborious task of having manually written thousands of notes and combed through charts. I don't know whether that will be of increasingly marginal value as things progress, though, and there is always an opportunity cost for time.

I do think one thing that frequently gets missed in the skepticism is the comparison to how things currently are. There is a presumption that clinicians practice in a well-thought-out, reflective way, and while some do, many do not. Tools like these can give lots of feedback, and while it is healthy to have some diversity in style, approach, etc., it is also true that there are better ways to practice, more aligned with the evidence. Mental health care has one of the widest "standards of care" and is perhaps more divorced from evidence than any other field of medicine. I have tremendous optimism for how this can improve care, especially for providers who might be below average, even when used fairly unthinkingly.

Carl Erik Fisher

Thanks Quinn, I basically agree on both points, though the failure/hallucination rate of LLMs (plus the privacy issues) gives me pause for basic clerical tasks. I think some companies will handle the complications decently well, but amid all the hype, there are probably a lot of other hastily deployed "wrappers" that don't do as well.

As for the status quo, I totally agree as well. We should not let the perfect be the enemy of the good, at least in an absolute sense. Especially in addiction psychiatry (but also in general psychiatry and mental health care), there is an enormous quality chasm between even the most basic evidence-supported practices and actual practice. If AI tools can help us cross that chasm, some amount of flattening and homogenization of care might be an acceptable tradeoff - for example, just following basic medication recommendations and guidelines. I would be excited to see reliable sidekick-like feedback and insights developed for that, e.g., "my patient has not adequately benefited from medications X + Y; what are others to consider, and with what tradeoffs?" Then again, I would ask: do we, or should we, really need AI to help us cross that chasm? Will it just be a band-aid on a deeper problem of burnout, insufficient continuing education, dysfunctional systems, etc.? That's not a reason not to work on these types of issues, but it is worth considering the full scope.

Thanks for writing. In the end I too am optimistic, but also curious to see how exactly it will prove helpful.

Becoming the Rainbow

To my thinking, the biggest ethical issue with AI is the most basic: it's not a human being. Even if AI could objectively do some tasks better, it still comes up short because it's not human, and that human-to-human contact is the essence of all therapeutic change. We should be wary of outsourcing the best of us to machines, no matter how "advanced."

Carl Erik Fisher

I think that is a concern for more advanced clinical AI. Brian Christian, in The Alignment Problem (a 2020 book), has a number of striking examples of unintended behavior and non-human actions that are impossible to just "program out" of AI.
