It’s cheap, quick and available 24/7, but is a chatbot therapist really the right tool to tackle complex emotional needs?
Even if that were the case, it’s still not good:
“The one who controls the AI now controls you.”
Coming to an AI therapist near you:
“Consume product”
“You don’t deserve a raise. You’re lucky to have a job at all.”
“Vote for fascist dictator. He’s much better than the other options.”
Also you should probably shoot the queen with a crossbow. It seems like a reasonable thing to do.
Brought to you by an antidepressant maker!
“but is a chatbot therapist really the right tool to tackle complex emotional needs?”
I dunno, is Lemmy really the right place for clickbait garbage?
Both Betteridge’s law of headlines and common sense say “no”
“Please drink verification can to continue emotional support for another hour”
Definitely not, but the truth is that mental health support and care are needed 24/7. Good mental health care. So many support needs go unmet because the labour cost is so high.
I will say that in my own experience, AI LLMs have been amazing with reflection and encouragement.
Does this mean good therapy? Not necessarily. I just wanted to share a positive experience.
My opinion was more a reflection that seeing a mental healthcare professional once a week isn’t really enough when people don’t have traditional support mechanisms.
What I’m trying to say is that before therapists, friends and family were the therapists. They were available to give support and advice nearly 24/7.
In today’s world, people are too busy to do that.
It was never queer people destroying the family. It’s always been capitalism
deleted by creator
It’s not ‘programmed’ at all.
deleted by creator
Pretty much. What’s programmed is the mechanism by which the model supervises its own training, adjusting the weights of its neural network until it correctly models the training data.
We have next to no idea what the eventual network does in modern language models, and it certainly isn’t programmed.
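For anyone curious, here’s a rough sketch of the part that *is* programmed: a generic self-supervised next-token training step. This is my own minimal illustration in PyTorch, not any lab’s actual code, and the toy model is a stand-in for a real transformer stack:

```python
# Minimal sketch of self-supervised language-model training (PyTorch).
# The only thing "programmed" is this loop; the behaviour emerges in the weights.
import torch
import torch.nn as nn

vocab_size, d_model = 50_000, 512
model = nn.Sequential(               # stand-in for a real transformer stack
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(tokens: torch.Tensor) -> float:
    """tokens: (batch, seq_len) integer IDs from the training corpus."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict the next token
    logits = model(inputs)                            # (batch, seq-1, vocab)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()    # gradients nudge millions/billions of weights...
    optimizer.step()   # ...and nobody inspects them individually
    return loss.item()
```

Nothing in there says what the trained network will end up doing; that falls out of the data and the weights.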
I don’t think these therapists can do better than ELIZA, Emacs’s built-in psychologist.
Do you feel strongly about discussing such things?
Can you give a specific example?
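For reference, ELIZA’s whole trick was keyword patterns plus canned reflections. A toy reconstruction of the idea in Python (a loose sketch, not Weizenbaum’s actual 1966 script):

```python
# Toy ELIZA-style responder: keyword patterns plus canned reflections.
import re
import random

RULES = [
    (r"\bI need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"\bI feel (.+)", ["Do you often feel {0}?", "Why do you feel {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?"]),
]
FALLBACKS = [
    "Do you feel strongly about discussing such things?",
    "Can you give a specific example?",
]

def respond(text: str) -> str:
    for pattern, replies in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I feel anxious about AI therapists"))
# -> e.g. "Why do you feel anxious about AI therapists?"
```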
Eventually, yes, I think it will be. Not yet though, the tech just isn’t strong enough atm. But an AI is resistant to the emotional toll, burnout and low pay that a real life therapist has to struggle with. The AI therapist doesn’t need a therapist.
Personally though, I think this is going to be one of the first widespread, genuinely revolutionary things LLMs are capable of. Couple more years maybe? It won’t be able to handle complex problems, it’ll have to flag and refer those cases to a doctor. But basic health maintenance is simpler.
That would assume the people designing AI want what is best for the person and not what will make them the most money at the expense of the consumer.
The companies involved in AI are NOT benevolent.
You could just run your own. There are plenty of open source models that don’t answer to any company.
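For example, with a recent version of Hugging Face’s transformers library, a local chat session is only a few lines. The model name below is just one example of an openly licensed chat model; anything you can fit in memory works the same way:

```python
# Minimal sketch: chatting with a locally run open-weights model via
# Hugging Face transformers. Requires a recent transformers version
# with chat-template support; the model name is only an example.
from transformers import pipeline

chat = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

messages = [{"role": "user",
             "content": "I've had a rough week. Can you help me reflect on it?"}]
reply = chat(messages, max_new_tokens=200)
# The pipeline returns the whole conversation; the last message is the reply.
print(reply[0]["generated_text"][-1]["content"])
```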
Why don’t I just give myself therapy? I know way more about what’s going on in my head than anyone else does.
Because what’s going on in your own head would taint your treatment plan and cause it to be self-defeating.
Maybe one day that’ll actually be possible.
Yes, one thing it absolutely has to be good at is referring patients to human therapists, for anyone who needs something beyond the standard strategies the AI is trained on. It has to be smart enough to know when to give up.
Edit: it would also be great if the AI matched these difficult cases to therapists who are known to do well with whatever the patient is dealing with, as well as matching on the patient’s personality, communication style, etc., wherever possible (a toy sketch of the idea follows below).
Edit 2: reworded the above for clarity.
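Purely hypothetically, the matching part could start out as simple as scoring a roster against a patient profile. Everything below (field names, scoring, the complexity flag) is invented for illustration; a real system would need clinical validation, not a similarity score:

```python
# Hypothetical sketch of the flag-and-refer idea above. All fields and
# weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Therapist:
    name: str
    specialties: set[str]
    style: str                      # e.g. "direct", "gentle"
    accepting_patients: bool = True

@dataclass
class PatientProfile:
    presenting_issues: set[str]
    preferred_style: str
    flagged_complex: bool = False   # set by the AI when it should give up

def match_score(t: Therapist, p: PatientProfile) -> float:
    overlap = len(t.specialties & p.presenting_issues)
    style_bonus = 1.0 if t.style == p.preferred_style else 0.0
    return overlap + style_bonus

def refer(p: PatientProfile, roster: list[Therapist]) -> Therapist | None:
    if not p.flagged_complex:
        return None                 # AI keeps handling basic maintenance
    candidates = [t for t in roster if t.accepting_patients]
    return max(candidates, key=lambda t: match_score(t, p), default=None)
```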
Where is the profit in sending someone to a different AI for help?
I meant referring them to human specialists.
This is going to sound really stupid, and I should note that I am actively in therapy too.
But I had to put my dog down about a month ago, and there was a point where I just needed some validation, so I went to GPT4 and asked it some questions and told it about how I was feeling. I even fed it a poem that I wrote about her and asked if it was good.
The responses were incredibly empathetic and kind, and did an amazing job of speaking directly to the anxiety, pain, and fear I was feeling in those moments. The responses were what I needed to hear, and they gave me a measure of peace in those gaps when people weren’t available, or when I wasn’t able to say things out loud. There was nothing new to me in those responses, but oftentimes we just need to be reminded by someone or something outside ourselves of what the truth is, and LLMs can absolutely fill that particular hole when trained properly.
My last three months in particular have been tough, and GPT4 has been a useful tool to get through a fair few storms for me.
It’s a completely different chatbot, but I will still leave this here.
Can AI do better than the real thing at whatever you personally are an expert in?
Does it matter that it checks in on you more, when it technically isn’t someone? I don’t get how people talk to bots when they know they aren’t people.
This is the best summary I could come up with:
So one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character.
Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks.
“Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life.
For the past eight months, Melissa, who experienced childhood trauma and abuse, has been chatting every day with Zaia’s psychologist on character.ai, while continuing her work with a human therapist, and says that her symptoms have become more manageable.
“Disease prevalence and patient need massively outweigh the number of mental health professionals alive on the planet,” says Ross Harper, CEO of the AI-powered healthcare tool Limbic.
Psychoanalyst Stephen Grosz, who has been practising for more than 35 years and wrote the bestselling memoir The Examined Life, warns that befriending a bot could delay patients’ ability “to make a connection with an ordinary person.”
Nothing new really… https://en.wikipedia.org/wiki/ELIZA
Betteridge’s Law!