It’s my favourite AI story of late, probably because it captures so well the chaos that can unfold when people actually use AI the way tech companies have all but told them to.
As the author of the story, Laurie Clarke, points out, it’s not a total pipe dream that AI could be therapeutically useful. Earlier this year, I wrote about the first clinical trial of an AI bot built specifically for therapy. The results were promising! But the secretive use by therapists of AI models that aren’t vetted for mental health is something very different. I had a conversation with Clarke to hear more about what she found.
I have to say, I was really fascinated that people called out their therapists after finding out they were covertly using AI. How did you interpret the reactions of those therapists? Were they trying to hide it?
In all the cases mentioned in the piece, the therapist hadn’t given their patients prior disclosure of how they were using AI. So whether or not they were explicitly trying to conceal it, that’s how it ended up looking when it was discovered. I think for that reason, one of my main takeaways from writing the piece was that therapists should absolutely disclose when they’re going to use AI and how (if they plan to use it). If they don’t, it raises all these really uncomfortable questions for patients when it’s exposed, and it risks irrevocably damaging the trust that’s been built.