Here’s a good & readable summary paper to pin your critiques on

  • Frank [he/him]@hexbear.net · 13 points · 5 months ago

    Hard agree. A hallucination results from a damaged mind perceiving things that aren’t there. LLMs have no mind and no perception, and a thing has to work before you can call it damaged. LLMs are exploring brave new frontiers in garbage in, garbage out.

    • itappearsthat@hexbear.net (OP) · 14 points · 5 months ago (edited)

      I would not necessarily say that is true, and the article summarizes a philosophically interesting reason why:

      The basic architecture of these models reveals this: they are designed to come up with a likely continuation of a string of text. It’s reasonable to assume that one way of being a likely continuation of a text is by being true; if humans are roughly more accurate than chance, true sentences will be more likely than false ones.
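      The “likely continuation” idea can be sketched with a toy model (the vocabulary and probabilities here are made up for illustration; a real LLM works over learned token distributions, not a hand-written table). Greedy decoding just appends the single most probable next word at each step:

      ```python
      # Toy sketch of "likely continuation" decoding (NOT a real LLM):
      # hypothetical, hand-written next-word probabilities.
      probs = {
          "the": {"sky": 0.6, "ground": 0.4},
          "sky": {"is": 0.9, "was": 0.1},
          "is": {"blue": 0.7, "green": 0.3},
      }

      def continue_text(prompt, steps=3):
          words = prompt.split()
          for _ in range(steps):
              dist = probs.get(words[-1])
              if not dist:
                  break  # no continuation known for this word
              # Greedy decoding: take the single most likely next word.
              words.append(max(dist, key=dist.get))
          return " ".join(words)

      print(continue_text("the"))  # → "the sky is blue"
      ```

      If true sentences dominate the training text, “blue” ends up more probable than “green” after “the sky is” — which is the article’s point about truth being one route to likelihood.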

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 5 points · 5 months ago

      Have you actually used ChatGPT? The vast majority of the time it spits out good-enough info. We use it at work frequently to write more tedious code. For example, it’s written approximately 7 trillion querySelectors for me, and as long as I hand-hold it, it will do a good job.

      The biggest problem is when it comes to anything involving human safety. You also have to hand-hold it to get it to spit out something that’s more or less exactly what you intended. But if you use it to draft a custom cover letter for you it’s probably gonna do a good enough job, and it’s not like anyone is actually reading that shit. It’s great at doing basic math equations that involve a lot of conversions for me. It sure as hell ain’t the end-all be-all that every tech company seems to be pushing, but it’s sure as hell not wrong 50% of the time.

      • itappearsthat@hexbear.net (OP) · 6 points · 5 months ago (edited)

        For me it is wrong more than 95% of the time. I stopped using it because it was just a waste of time. I am not doing particularly difficult or esoteric programming work and it just could not hack it at all. Often the ways it was wrong were quite subtle. And it presents wrong answers with the exact same confidence it presents right answers.

  • jaden · 1 point · 5 months ago

    Yeah, I just don’t see how it’s really any different from a human in that respect.

    • itappearsthat@hexbear.net (OP) · 1 point · 5 months ago

      Humans are capable of metacognition: holding levels of confidence about the accuracy of their own beliefs. They are also capable of communicating this uncertainty, usually through tone and phrasing.

      • jaden · 1 point · 17 days ago (edited)

        I suspect that arises from a sort of adversarial or autoregressive interplay between areas of the brain. I do observe early teens displaying very low metacognition about the accuracy of what they say. It’s a true stereotype that they will pick an argument almost arbitrarily and parrot talking points from online. I imagine that for LLMs to develop that, they might just need an RLHF training flow that mirrors stuff like arguing for BS with your parents or experiencing failure as a result of misinformation. That’s why I think it’s a matter of instruction fine-tuning rather than some fundamental attribute of LLMs.

        It’s probably part of developmental instincts in humans to develop better metacognition by going through an argumentative phase like that.