Our paper in BMJ Mental Health: AI-generated imagery often amplifies harmful stereotypes about mental disorders. How models like DALL-E 3 and Midjourney shape public perceptions of mental illness
Fascinating research. It's easy to see how this could affect end users who turn to AI generation to self-diagnose or simply learn about their disorder/symptoms, but we just need to stay aware of the bias/caricature/stereotype (as you say in solution 3). imho AI will be a huge boon in medical informatics with regards to crunching the big data of online testimonials, forums, reddit, etc., to help advance the field of mental illness research and access to therapy (with huge caveats of course, 'robo-therapist' is a rather dystopian concept lol).
Thanks Yann! Yes I think this may affect people going to AI for self-diagnosis or perspective, but I also worry about the more casual dissemination of stigmatizing content saturating the landscape. For example most people won't think much of using a dashed-off AI-generated image for a web page or what have you.
I do think AI will help sort through online info related to mental health, and we should always be mindful of the distinction between reflecting current levels of awareness (i.e. just echoing the usual ideas) versus advancing understanding. In other words, AI shouldn't be used just to keep us at the same level.
Really interesting.
100%, there is a dialogue going on; we need to keep track of how it influences us.