Sister was messing around with a UV light and noticed this on her phone screen under it. My phone does not have this (all I get is a grid of dots I’m pretty sure are to do with the touch screen).

  • treadful · 3 days ago

    You can never trust a factual response from an LLM. Plain and simple. It’ll answer with confidence whether the information it comes up with is true or false.

    Commenters presenting its answers as fact are not helping a discussion aimed at finding the answer.

    • hemmes@lemmy.world · 2 days ago

      No, you shouldn’t blindly trust whatever a chat bot outputs. You have to set your expectations correctly with an LLM, and you have to learn and practice how to prompt to get the best utility out of one.

      Understanding that an LLM is best at sorting data is the first step. A simple example is my use case from the other day: I was making a table for my company’s 2025 holiday schedule. We base our holidays on our local union holiday schedule. Currently, the union has the 2024 schedule posted on its webpage. I took a screenshot of the schedule which was listed as

      Holiday      Date          Day
      Christmas    December 25   Wednesday

      And so on for the 10 or so days.

      I uploaded the screenshot JPG and asked ChatGPT to format the list in the JPG as a table. It quickly gave me a nicely formatted text table of the 2024 holiday schedule from the image’s data. I then asked it to update the table data for 2025 dates and days, and it did so easily. I verified the days were correct - they were - and copied the table onto my Word letterhead and posted it to our SharePoint site. It was very useful - a simple example.
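      That verification step - checking which weekday each shifted date actually falls on - is worth doing yourself rather than trusting the model. A minimal sketch, using a few hypothetical fixed-date holidays (not the actual union schedule from the screenshot):

```python
from datetime import date

# Hypothetical fixed-date holidays for 2025 (illustrative only;
# the real schedule in the comment above isn't reproduced here)
holidays_2025 = {
    "New Year's Day": date(2025, 1, 1),
    "Independence Day": date(2025, 7, 4),
    "Christmas": date(2025, 12, 25),
}

def weekday_name(d: date) -> str:
    """Return the full weekday name for a date, e.g. 'Thursday'."""
    return d.strftime("%A")

# Print each holiday with its computed day of the week,
# which can be compared against the LLM's output.
for name, d in holidays_2025.items():
    print(f"{name}: {d:%B %d} falls on a {weekday_name(d)}")
```

      A few seconds with a script like this confirms (or catches) the days the LLM filled in - e.g. Christmas moves from a Wednesday in 2024 to a Thursday in 2025.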

      You need to take everything with a grain of salt when it comes to LLMs and really understand what the LLM is and how it works. Set your expectations correctly and it can be a very powerful utility.

      It’s unfortunate that folks just rage out at the sight of LLMs, maybe because they had a bad experience themselves. I think people want it to be a Jarvis and it’s just not that. It feels like you can just talk to it and it’ll understand and give you the right answer, but it won’t. It has to reply with something it rationalizes as the most likely answer: which words should I output that are most likely what the user wants to see? This is why most output sounds like “fact”. But it doesn’t know facts, only how to sort data.

      So, yes, you should never blindly trust an LLM’s output, but you can practice how to prompt. Really ask yourself: what do I need from the unsorted data I’m feeding this chat bot? Am I giving it enough data to sort through? If you don’t prompt with enough data, it will fill in the blanks as best it can, and that may result in something totally different from what you expected.

      • treadful · 2 days ago

        tl;dr

        FWIW, I never said LLMs were useless. I just said you can’t trust its output. Go ahead and use it to narrow down your search for the facts but if you cite it as fact I’m going to downvote you.

        • hemmes@lemmy.world · 2 days ago

          No worries. I just like having conversations with others about tech I’m into.