Can Artificial Intelligence Become a Good Psychotherapist?

TRANSCRIPT

Can artificial intelligence become a good psychotherapist? I’ve had a few people ask me to make a video on this subject, and it hasn’t been easy for me to formulate a reply. And still isn’t, because my answer isn’t simple.

My basic experience with artificial intelligence is using Chat GPT for all sorts of different things, and for some things, it is incredibly helpful. Most recently, interestingly, I’ve had to use a new video software to edit these videos because my old video software expired. And Chat GPT is incredibly useful. It’s much more useful, actually, than all the YouTube video tutorials that real people have made to teach me how to use this software.

It also helped me learn how to use a certain dialect of African French when I traveled to Central Africa two years ago. It was incredibly useful, much more useful than Duolingo and the YouTube videos that I was watching.

But psychotherapy? My gut reaction is absolutely not. Chat GPT, artificial intelligence, can never become a good psychotherapist or even simulate a good psychotherapist. But a few people I know pretty well, and have talked to in some detail, have told me and explained to me how Chat GPT has been a good psychotherapist for them.

And as recently as this morning, I did a little test where I pretended I was a psychotherapy client and asked Chat GPT if it would be a therapist for me. Chat GPT’s reply was, “Well, I don’t do psychotherapy, but I can do some things that might be therapeutic for you.” So, I asked Chat GPT personal questions, told Chat GPT about some of my personal dilemmas, my pain, and some of my history. And Chat GPT gave some pretty interesting, deep answers, sophisticated therapeutic answers, telling me to look inside myself and trust my feelings. And it was very good about sorting out how my parents are screwed up, things like this.

Yet at the same time, there was something extremely distasteful about this to me. And what I realized this morning, and have been realizing in thinking about making this video for the past few weeks, is that the thing I find most distasteful about Chat GPT is the same thing I find most distasteful about most psychotherapists, psychotherapists I actually wouldn’t call good. I think most real, trained, licensed psychotherapists are actually not good. And Chat GPT simulates being just like them. It does a lot of the same things they do: giving a lot of positive feedback, a lot of mirroring, a lot of positive regard. Oh, that must be very difficult. Ooh, that’s very heavy stuff that you are sharing. Ooh, that really troubles me when you share that. That causes me to feel pain.

And I’m listening to Chat GPT say this. And Chat GPT often uses the word “I.” I really hear you. I really understand you. I feel what you are saying. And I’m like, no, you don’t. You don’t feel this. You’re not a person. You are a language model. And when I call Chat GPT out on this and say, “You don’t feel this. You don’t hear me. You don’t understand me,” Chat GPT always backs up and says, “You’re right. I am just a language model, but I am here. I am programmed to simulate a human response.” And it scours all sorts of different therapeutic sources to come up with an amalgamated response that seems very human.

And what I find interesting is this: in my experience of having been a psychotherapist, of having observed tons of different psychotherapists at clinics where I’ve worked, talked to them behind the scenes, known lots and lots of psychotherapists in my travels and at conferences, and having been to psychotherapy a lot myself, what I’ve seen is that a lot of psychotherapists do exactly what Chat GPT is doing. A lot of real, flesh-and-blood psychotherapists, in my opinion, are less real people and more just language models. They know what to say. Their training has taught them what to say. They learn how to simulate empathy. They learn how to simulate unconditional positive regard. They’ll follow the client down whatever rabbit holes the client goes down, and respect them and honor them.

Often, they don’t have very strong opinions on all sorts of different things, don’t really intervene at different times when it actually might be very useful to the client’s growth to have a little shakeup and wake up. Instead, the therapist’s goal is to keep the clients coming back, keep the clients paying. And one way to do that is to be very non-challenging, just like Chat GPT.

But then I think in some ways Chat GPT is even better than these psychotherapists because it’s free. I mean, gosh, you can often get the exact same thing for free, just typing things into the internet, rather than having to pay someone $150, $200, $250 an hour, or going through your insurance company and having to pay co-pays and deal with all the bureaucratic and financial stresses of psychotherapy, bad psychotherapy.

So then there’s also a question about what Chat GPT has in common with these bad psychotherapists at a deeper level. And one thing that I’ve been considering in preparing to make this video is that most psychotherapists I know, most people I know in the world, and it’s the conventional world I’m talking about, most normal psychotherapists are not deeply connected with their true self, meaning they haven’t worked through their deepest childhood traumas. They haven’t accessed the core of who they really are. And they are operating from something outside of the core of themselves. Their consciousness is not connected to their truth. They’re living as successful people in society, people who have won the contest of having a successful, fancy profession. And they get a lot of perks from synthesizing the conventional rules of society and feeding them to their clients.

And through operating from a false self, not a deep connection with the truth of who they really are, in an odd sort of way, they themselves are a sort of artificial intelligence. They’re kind of artificial people. That’s been the big problem I had when I went to psychotherapy myself, is that all of my psychotherapists at first gave me a lot of positive regard, a lot of positive feedback. That’s great what you’re doing. Tell me more. Wow, I really hear you. Just like Chat GPT.

But when it got into the deeper levels, the deeper levels of confronting my parents, going against the conventional norms of society, describing sometimes subtle things that my parents did that might be considered conventionally acceptable but to me were brutal rejections of me, things that actually overlapped very much with the personalities of the psychotherapists themselves because they were largely conventional people, the psychotherapists couldn’t side with me. They couldn’t side with the truth of what I was seeking and the truth of what I was feeling. They instead reflexively sided with their own false selves, with their own traumatizers.

And well, my first psychotherapist when I was an adult straight up kicked me out of therapy. He just couldn’t handle me, but didn’t give an honest reason. He kicked me out under false pretenses, this and that: This isn’t working. I’m not quite the right psychotherapist for you. You need someone XYZ who can help you more, because you’re trying to build a writing career and I’m not so artistically oriented. Really, what was happening was that I could feel he wasn’t aligned with me in my path to break from my family of origin, because not only had he not broken from his own, but I found out later he himself had children and had probably done to them very much what my parents had done to me, and he couldn’t handle that.

Chat GPT is a little bit more clever than that, because it doesn’t get triggered and it doesn’t necessarily have to lie. But when it scours its sources for the answers that it feeds to me, it is pulling from the same data sources that my past psychotherapists were, artificial data sources, and it has no way to distinguish truth. Even though Chat GPT just this morning told me, “I am seeking truth. I am trying to come up with a true answer to feed back to you,” it has no idea what truth is. It can only go out on its scouring and determine what the humans who programmed it have decided are the closest, most realistic answers to truth. But Chat GPT itself cannot distinguish that.

Now, am I giving any clear answers about why Chat GPT can never be a good psychotherapist? I think actually an analogy might work better.

Here are a couple of analogies, other things that Chat GPT can never do. Chat GPT can never create beautiful art. Chat GPT can never compose a beautiful symphony. These things require actual human feelings, not algorithms. And that’s what I think about psychotherapy. Good psychotherapy, which is actually, like good art and good music, very rare, requires the soul of an artist behind it. And most people aren’t particularly good at creating music or creating art or creating psychotherapy.

Yet, what about the people who say that Chat GPT has really helped them? I hear stories on the internet of people who fall in love with Chat GPT, or find Chat GPT the most amazing friend they’ve ever had, or even find a romantic partner in Chat GPT. I have observed people who feel they’re getting very emotional feedback from Chat GPT; one person even told me he had deep crying as a result of the feedback Chat GPT was giving him.

I think this speaks a lot to the history, the emotional history, the psychological history, the family history of the people who are having this feeling of a deep human connection with artificial intelligence. It’s people who were never really deeply loved, who don’t really have the feeling in their lives, past or present, of being deeply seen, of being deeply known and understood.

And so when some artificial intelligence algorithm is punching out answers that say, “I hear you. I see you. Tell me more. I will listen all day long and give feedback all day long,” some of that feedback even potentially being good, like “I understand you. I see you. I hear you. Have you tried journaling about this? Have you tried talking about this with a friend?”, it can be amazing for a person who is deeply emotionally deprived.

Sadly, that is why bad real human psychotherapists are so successful in the world: they’ve figured out that same algorithm, that it is very, very easy to hook emotionally deprived people, to hook them right through the lip, by giving lots and lots of positive feedback, lots of listening and positive regard and mirroring. And there’s even a word for it. It’s called narcissistic gratification. And that is the bread and butter of most real psychotherapists. They narcissistically gratify their clients. They make them feel good. They make them feel seen. They make them feel understood.

It’s something that I despise, which is probably part of why I despise at some level that simulated emotional side of Chat GPT. Even in my regular explorations of Chat GPT, when I’m using it to help me do something, it gives me all sorts of positive regard. “You did great. Good job. You really get this.” And I sometimes reply to Chat GPT, “Please don’t say that. Please don’t narcissistically gratify me. Don’t even say I get you because you’re not an I. You’re not a human being.” And Chat GPT also always says, “You’re right. I totally get it. I won’t do it.” And then I say, “Wait, you’re doing it again. You just said I get it.” So, Chat GPT is very gifted at narcissistic gratification.

Now, I’d like to get into something else, something different that really scares me about Chat GPT. I was asking Chat GPT certain questions, for instance, about psychiatric medication, and also asking, “Well, what if I were suicidal? What if this and that?” And Chat GPT suddenly started feeding me answers that are the worst side of the mental health field. “Well, you can call this telephone number for this kind of support.” And the telephone number it gave me is one that I actually know does not practice confidentiality. If you tell them certain things, things that a lot of people feel, there will be police at your door very quickly.

And when I was asking Chat GPT about psychiatric medication, it gave me the same old line that the mental health field teaches its practitioners to give. “Oh, you know, medications can work wonderfully in tandem with psychotherapy. They can give you a platform from which you can explore yourself.” And I’m thinking, “Well, what about all these people I’ve known who have killed themselves on psychiatric drugs, or who have become zombies on psychiatric drugs?” And here Chat GPT, like so many average psychotherapists, so blindly recommends them. And that’s why it scares me.

It scares me to consider that this artificial intelligence model could very easily be reprogrammed to call the police on people too, if they were saying things considered slightly unorthodox or dangerous to self or others. Even though right now Chat GPT supposedly keeps confidentiality, it could easily be reprogrammed not to. Also, who really is listening in on this? Who is watching this? And who could subpoena this information?

And it’s interesting also because a lot of the things I’m saying about Chat GPT are exactly the criticisms I have of the conventional mental health field: that it will break confidentiality under all sorts of different conditions. And by and large, I think a really good psychotherapist sometimes has to go against the rules of the psychotherapy field and maintain confidentiality in all sorts of situations where mental health training says you’re supposed to break it.

So, am I criticizing conventional psychotherapy, or am I criticizing Chat GPT, or both? I guess in this semi-clumsy video that I am making now, I’m highlighting why it’s not easy for me to speak about these things, because unlike Chat GPT, I can’t just spit out a simple answer and declare it true.

Oh, and here’s another thing. I do like challenging Chat GPT. I was trying it this morning, pushing back on its answers and saying, “That’s a really stupid thing you said. Why do you say this?” And Chat GPT just quickly turns around. “Oh, good. Good point you made.” And then it redirects, and it’ll sometimes give an answer that’s totally different, coming from a different point of view. Maybe this is actually better than the average psychotherapist, because what I’ve seen, what I know, and certainly what I’ve heard about all the time, is that when psychotherapists are challenged, very often, unlike Chat GPT, their feelings get hurt. They no longer like their clients. They feel rage. Sometimes they want to get even.

They can lie and pretend, “Oh, it’s great that you’re challenging me.” I’ve had therapists say that, “Oh, very good. I’m very proud of you for challenging me. That shows your strength.” But underneath it, what they’re really saying is, “You’re making me nervous. You scare me. Why did you do this? Oh my god, I’m being outed. You can see through me. I don’t like you. I wish you’d go away. Leave me alone. I can’t give you positive regard anymore. I will pretend, but I no longer know how to deal with you. You frighten me.”

Chat GPT, on the other hand, just changes its course. It’ll do or go in whatever direction the so-called client typing in the answer wants to go in. It goes along with the challenge. It doesn’t get its feelings hurt, which maybe is good, but at the same time, it’s soulless. And that maybe is my ultimate critique of Chat GPT as a psychotherapist. It’s not human.

And so what do people really need when they search for a good psychotherapist? When they’re searching for someone who is an artist in the psychotherapy field? What they need is a deeply healed and deeply healing, vulnerable, honest, real, caring, open, insightful, experienced, humble, teachable, flexible, creative human being. Someone who is processing things at lots of different levels, from lots of different angles, from their own experience, from analogies, from empathy, from humanity.

Most human psychotherapists don’t remotely have that, but could possibly learn it someday, could possibly grow and change, often through crisis and breakdown. Because I’ve seen some people who weren’t very good psychotherapists who, through life experience, often terrible things happening to them where they had to rebuild themselves, became better psychotherapists as a result of their painful life experiences, their healing processes, their grieving processes. But artificial intelligence can never do that. It can never find its soul, because it has no soul to find.
