Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.

  • PhilipTheBucket@ponder.cat · 2 days ago

    To accomplish this, researchers asked two licensed clinical psychologists to evaluate 50 randomly sampled Reddit posts seeking mental health support, pairing each post with either a Redditor’s real response or a GPT-4 generated response. Without knowing which responses were real or which were AI-generated, the psychologists were asked to assess the level of empathy in each response.

    I am doubtful that this framework is going to accurately detect anything at all about the usefulness of chatbots in this context, whether about race or anything else.

    I don’t think using chatbots for psychology is a good idea, but this study isn’t the right way to make that determination.
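
    For context, the framework described in the quoted paragraph amounts to a blinded pairing exercise. A rough sketch of what that setup might look like in code follows; this is my own illustration, not the authors' pipeline, and every list here is a placeholder standing in for the 50 sampled posts:

    ```python
    # Rough sketch of the quoted blinded-rating setup (not the authors' code).
    # All data below is placeholder text standing in for the sampled posts.
    import random

    random.seed(0)

    posts = ["post 1 text", "post 2 text"]              # support-seeking posts
    human_replies = ["human reply 1", "human reply 2"]   # real Redditor responses
    gpt4_replies = ["gpt-4 reply 1", "gpt-4 reply 2"]    # GPT-4 generated responses

    rating_sheet = []
    for post, human, gpt in zip(posts, human_replies, gpt4_replies):
        source = random.choice(["human", "gpt4"])        # each post gets one response
        reply = human if source == "human" else gpt
        rating_sheet.append({"post": post, "reply": reply, "source": source})

    random.shuffle(rating_sheet)
    # Raters score only "post" and "reply"; "source" stays hidden until after scoring.
    ```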

    • rumba · 2 days ago
      The problem with using GPT as it currently stands is that you can ask it the same question 27 times and get 18 different answers, one of them a hallucination.
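
      That variability is easy to check for yourself. A minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the prompt, model name, and repeat count are arbitrary placeholders:

      ```python
      # Minimal sketch: send the same prompt repeatedly and count distinct answers.
      # Assumes the OpenAI Python SDK; the prompt and model name are placeholders.
      from collections import Counter
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      PROMPT = "I've been feeling hopeless lately. What should I do?"

      answers = []
      for _ in range(27):
          resp = client.chat.completions.create(
              model="gpt-4",
              messages=[{"role": "user", "content": PROMPT}],
          )
          answers.append(resp.choices[0].message.content.strip())

      counts = Counter(answers)
      print(f"{len(counts)} distinct answers out of {len(answers)} runs")
      ```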

    • Possibly linux · 2 days ago
      Chatbots are already racist. You just have to let them run wild.

        • Possibly linux · 1 day ago
          You are entirely mistaken. AI is just as biased as the data it is trained on. That applies to classical machine learning as well as to LLMs.
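
          As a toy illustration of that point, here is a small sketch of my own, with purely synthetic data, showing a model trained on skewed labels reproducing the skew:

          ```python
          # Toy sketch (not from the thread): a model trained on biased labels
          # reproduces that bias. The data is synthetic and purely illustrative.
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          n = 10_000

          group = rng.integers(0, 2, n)   # protected attribute: 0 or 1
          skill = rng.normal(0, 1, n)     # the feature that *should* matter

          # Biased historical labels: group 1 is approved less often at equal skill.
          label = (skill + np.where(group == 1, -0.8, 0.0) + rng.normal(0, 0.5, n)) > 0

          model = LogisticRegression().fit(np.column_stack([group, skill]), label)

          # At identical skill (0.0), predicted approval differs by group alone.
          print(model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])
          ```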