Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.
I am doubtful that this framework will accurately detect anything at all about the usefulness of chatbots in this context, whether related to race or anything else.
I don’t think using chatbots for psychology is a good idea, but this study isn’t the right way to make that determination.
The problem with using GPT as it currently stands is that you can ask it the same question 27 times and get 18 different answers, one of them a hallucination.