Discussion about this post

Quinn MacDougald:

Thank you for the article, Carl! Some great considerations.

Like any clinical tool, it must be used with a clear-eyed understanding of how it works and of its limitations. At least as currently developed, it is useful for clerical tasks and, I find, most powerful for idea generation. Naturally, this quickly becomes a slippery slope toward outsourcing your thinking rather than thinking with it. And I think there are genuinely valid concerns about losing thinking skills if you are "brought up with" LLMs and AI, so to speak; there is certainly value in the laborious task of having manually written thousands of notes and combed through charts. I don't know whether that value will become increasingly marginal as things progress, though, and there is always an opportunity cost for time.

I do think one thing that frequently gets missed in the skepticism is the comparison to things as they currently are. There is a presumption that clinicians practice in a well-thought-out, reflective way, and while some do, many do not. Tools like these can give lots of feedback, and while it is healthy to have some diversity in style, approach, etc., it is also true that there are better ways to practice, more aligned with the evidence. Mental health care has one of the widest "standards of care" and is perhaps more divorced from evidence than any other field of medicine. I have tremendous optimism about how it can improve care, especially for providers who might be below average, even when used fairly unthinkingly.

Becoming the Rainbow:

To my thinking, the biggest ethical issue with AI is the most basic: it's not a human being. Even if AI could objectively do some tasks better, it still comes up short because it's not human, and human-to-human contact is the essence of all therapeutic change. We should be wary of outsourcing the best of us to machines, no matter how "advanced."

