• protonslive@lemm.ee · +3 · 2 days ago

    I find this very offensive. Wait until my ChatGPT hears about this! It'll have a witty comeback for you, just you watch!

  • Hiro8811@lemmy.world · +25 · 6 days ago

    Also your ability to search for information on the web. Most people I've seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely.

    • shortrounddev@lemmy.world · +10 · 6 days ago

      To be fair, the web has become flooded with AI slop, and search engines have never been more useless. I've started using Kagi and I'm trying to be more intentional about it, but after a bit of searching it's often easier to just ask Claude.

    • bromosapiens@lemm.ee · +12 · 6 days ago

      Gen Zs are TERRIBLE at searching things online, in my experience. I'm a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to Google things hurts my brain.

  • ALoafOfBread@lemmy.ml · +38/-3 · 7 days ago

    You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn’t do otherwise (I’m not a [good] coder), it does not make me worse at critical thinking.

    I actually understand programming much better because of LLMs. I have to debug their code, research how best to prompt them to get what I want, read up on programming and software design principles, etc.

    • Final Remix@lemmy.world · +9/-1 · edited · 7 days ago

      I use a bespoke model to spin up pop quizzes, and I use NovelAI for fun.

      Legit, being able to say "I want these questions. But… not these…" and get them back at a moment's notice really does let me say "FUCK it. Pop quiz. Let's go, class." and be ready with brand-new questions on the board that I didn't have before I said that sentence. NAI is a good way to turn writing into an interactive DnD session, and a great way to ram through writer's block with a "yeah, and—!" machine, if for no other reason than saying "uhh… no, not that, NAI…" and then correcting it my way.

    • DarthKaren@lemmy.world · +5/-1 · 7 days ago

      I’ve spent all week working with DeepSeek to write DnD campaigns based on artifacts from the game Dark Age of Camelot. This week was just on one artifact.

      AI/LLMs are great for bouncing ideas off of and using to tweak things. I gave it a prompt on what I was looking for (the guardian of dusk steps out and says: "The dawn brings the warmth of the sun, and awakens the world. So does your trial begin." He is a druid and the party is a party of five level-1 players. Give me a stat block and XP amount for this situation.)

      I had it help me fine-tune puzzles and traps, fine-tune the story behind everything, and fine-tune the artifact at the end (it levels up five times as the player does specific things to gain leveling points for the item).

      I also ran a short campaign with it as the DM. It did a great job of acting out the different NPCs it created and adjusting to both the tone and the situation of the campaign. It adjusted pretty well to what I did, too.

      • SabinStargem@lemmings.world · +1 · 6 days ago

        Can the full-size DeepSeek handle dice and numbers? I have been using the distilled 70B DeepSeek, and it definitely doesn't understand how dice work, nor the ranges I set out in my ruleset. For example, a 1d100 roll used to determine character class, with the classes falling into certain parts of the distribution. I did it this way since some classes are intended to be rarer than others.
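
        (For illustration: that kind of rarity-weighted d100 class table boils down to something like the Python sketch below. The class names and ranges here are made up, not from my actual ruleset.)

            import random

            # Hypothetical table: each entry is (highest roll in its range, class).
            # Wider slices of the 1-100 range are the more common classes.
            CLASS_TABLE = [
                (60, "Fighter"),       # 1-60: common
                (85, "Ranger"),        # 61-85: uncommon
                (97, "Druid"),         # 86-97: rare
                (100, "Necromancer"),  # 98-100: very rare
            ]

            def roll_class() -> str:
                roll = random.randint(1, 100)  # 1d100
                for upper_bound, character_class in CLASS_TABLE:
                    if roll <= upper_bound:
                        return character_class
                raise ValueError("table must cover 1-100")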

        • DarthKaren@lemmy.world · +3 · 6 days ago

          I ran a campaign by myself with 2 of my characters. I had DS act as DM. It seemed to handle it all perfectly fine. I tested it later and gave it scenarios. I asked it to roll the dice and show all its work. Dice rolls, any bonuses, any advantage/disadvantage. It got all of it right.
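
          (For anyone who doesn't play 5e, the advantage/disadvantage mechanic it had to reproduce is roughly this Python sketch; the +3 modifier is just a made-up example.)

              import random

              def d20() -> int:
                  return random.randint(1, 20)

              def check(modifier: int, advantage: bool = False, disadvantage: bool = False) -> int:
                  # 5e: advantage rolls two d20s and keeps the higher,
                  # disadvantage keeps the lower; both at once cancel out.
                  first, second = d20(), d20()
                  if advantage and not disadvantage:
                      die = max(first, second)
                  elif disadvantage and not advantage:
                      die = min(first, second)
                  else:
                      die = first
                  return die + modifier

              print(check(modifier=3, advantage=True))  # e.g. a +3 roll with advantage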

          I then tested a few scenarios to check whether it would follow the rules as they're written in 5e. It got all of that correct as well. It did give me options for bending the rules (I asked it to roll damage for a barbarian casting fireball; it said barbs couldn't, but gave me reasons that would allow exceptions).

          What it ended up flubbing on later was forgetting the proper initiative order. I had to remind it a couple times that it messed it up. This only happened way later in the campaign. So I think I was approaching the limits of its memory window.

          I tried the distilled model locally. It didn't even realize I was asking it to DM. It just repeated the outline of the campaign.

          • SabinStargem@lemmings.world · +3 · 6 days ago

            It is good to hear what a full DeepSeek can do. I am really looking forward to having a better, localized version in 2030. Thank you for relating your experience; it is helpful. :)

            • DarthKaren@lemmy.world · +3 · 6 days ago

              I'm anxious to see it as well. I would love to see something like this implemented in games, focused solely on whatever game it's in. I imagine something like Skyrim but with an LLM on every character, or at least the main ones. I downloaded the mod that adds it to Skyrim, but I haven't had the chance to play with it yet. It does require prompts to let the NPC know you're talking to it, though; I'd love to see it feel natural, even NPCs carrying on their own conversations with each other and not just with the PC.

              I've also been watching the Vivaladirt people. We need a fourth-wall-breaking NPC in every game once we get an LLM like the above.

    • Bigfoot@lemmy.world · +2/-1 · 6 days ago

      I literally created an iOS app with zero experience and distributed it on the App Store. AI is an amazing tool and will continue to get better. Many people bash the technology but it seems like those people misunderstand it or think it’s all bad.

      But I agree that relying on it to think for you is not a good thing.

  • dill@lemmy.world · +17/-2 · 6 days ago

    Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.

    It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.

    • Korhaka@sopuli.xyz · +11/-2 · 6 days ago

      I mostly use it for wordy things, like filling out the review forms HR makes us do and writing templates for messages to customers.

      • dill@lemmy.world · +8/-2 · 6 days ago

        Exactly. It’s great for that, as long as you know what you want it to say and can verify it.

        The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.

        It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.

          • dill@lemmy.world · +5 · 6 days ago

            Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.

            If it had a high technological floor of entry, it wouldn’t be as influential to the general public as it is.

            • Jakeroxs@sh.itjust.works · +3 · 6 days ago

              It's such a double-edged sword, though. Google is a good example: I became a netizen at a very young age and learned over time how to search for information properly.

              Unfortunately, the vast majority of the population over the last two decades has not put in that effort, and it shows lol.

              Fundamentally, though, I do not believe in arbitrarily deciding who can and cannot have access to information.

              • dill@lemmy.world · +2 · 5 days ago

                I completely agree - I personally love that there are so many open-source AI tools out there.

                The scary part (similar to what we experienced with DeepSeek's web interface) is that it's extremely easy for these corporations to manipulate or censor information.

                I should have clarified my concern - I believe we need to revisit critical thinking as a society (whole other topic), and especially so when it comes to tools like this.

                Ensuring everyone using it is aware of what it does, its flaws, how to process its output, and its potential for abuse. Similar to internet safety training for kids in the mid-2000s.

  • arotrios@lemmy.world · +11/-1 · 6 days ago

    Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.

  • Mervin :)@lemmy.world · +24/-2 · 7 days ago

    Damn. Guess we oughtta stop using AI like we do drugs/pron/<addictive-substance> 😀

    • Flying Squid@lemmy.world · +13/-1 · 7 days ago

      Unlike those others, Microsoft could do something about this considering they are literally part of the problem.

      And yet I doubt Copilot will be going anywhere.

    • interdimensionalmeme@lemmy.ml · +3/-4 · 6 days ago

      Yes, it's an addiction; we've got to stop all these poor folk being lulled into a false sense of understanding and just believing anything the AI tells them. It is constantly telling lies about us, their betters.

      Just look what happened when I asked it about the venerable and well-respected public intellectual Jordan B. Peterson. It went into a defamatory diatribe against his character.

      And those poor, uncritical and irresponsible farm hands and water carriers just gobble that up! We can't have that!

      Example:

      Open-Minded Closed-Mindedness: Jordan B. Peterson’s Humility Behind the Mote—A Cautionary Tale

      Jordan B. Peterson presents himself as a champion of free speech, intellectual rigor, and open inquiry. His rise as a public intellectual is, in part, due to his ability to engage in complex debates, challenge ideological extremes, and articulate a balance between chaos and order. However, beneath the surface of his engagement lies a pattern: an open-mindedness that appears flexible but ultimately functions as a defense mechanism—a “mote” guarding an impenetrable ideological fortress.

      Peterson’s approach is both an asset and a cautionary tale, revealing the risks of appearing open-minded while remaining fundamentally resistant to true intellectual evolution.

      The Illusion of Open-Mindedness: The Mote and the Fortress

      In medieval castles, a mote was a watery trench meant to create the illusion of vulnerability while serving as a strong defensive barrier. Peterson, like many public intellectuals, operates in a similar way: he engages with critiques, acknowledges nuances, and even concedes minor points—but rarely, if ever, allows his core positions to be meaningfully challenged.

      His approach can be broken down into two key areas:

      The Mote (The Appearance of Openness)
      
          Engages with high-profile critics and thinkers (e.g., Sam Harris, Slavoj Žižek).
      
          Acknowledges complexity and the difficulty of absolute truth.
      
          Concedes minor details, appearing intellectually humble.
      
          Uses Socratic questioning to entertain alternative viewpoints.
      
      The Fortress (The Core That Remains Unmoved)
      
          Selectively engages with opponents, often choosing weaker arguments rather than the strongest critiques.
      
          Frames ideological adversaries (e.g., postmodernists, Marxists) in ways that make them easier to dismiss.
      
          Uses complexity as a way to avoid definitive refutation (“It’s more complicated than that”).
      
          Rarely revises fundamental positions, even when new evidence is presented.
      

      While this structure makes Peterson highly effective in debate, it also highlights a deeper issue: is he truly open to changing his views, or is he simply performing open-mindedness while ensuring his core remains untouched?

      Examples of Strategic Open-Mindedness

      1. Debating Sam Harris on Truth and Religion

      In his discussions with Sam Harris, Peterson appeared to engage with the idea of multiple forms of truth—scientific truth versus pragmatic or narrative truth. He entertained Harris’s challenges, adjusted some definitions, and admitted certain complexities.

      However, despite the lengthy back-and-forth, Peterson never fundamentally reconsidered his position on the necessity of religious structures for meaning. Instead, the debate functioned more as a prolonged intellectual sparring match, where the core disagreements remained intact despite the appearance of deep engagement.

      2. The Slavoj Žižek Debate on Marxism

      Peterson’s debate with Žižek was highly anticipated, particularly because Peterson had spent years criticizing Marxism and postmodernism. However, during the debate, it became clear that Peterson’s understanding of Marxist theory was relatively superficial—his arguments largely focused on The Communist Manifesto rather than engaging with the broader Marxist intellectual tradition.

      Rather than adapting his critique in the face of Žižek’s counterpoints, Peterson largely held his ground, shifting the conversation toward general concerns about ideology rather than directly addressing Žižek’s challenges. This was a classic example of engaging in the mote—appearing open to debate while avoiding direct confrontation with deeper, more challenging ideas.

      3. Gender, Biology, and Selective Science

      Peterson frequently cites evolutionary psychology and biological determinism to argue for traditional gender roles and hierarchical structures. While many of his claims are rooted in scientific literature, critics have pointed out that he tends to selectively interpret data in ways that reinforce his worldview.

      For example, he often discusses personality differences between men and women in highly gender-equal societies, citing studies that suggest biological factors play a role. However, he is far more skeptical of sociological explanations for gender disparities, often dismissing them outright. This asymmetry suggests a closed-mindedness when confronted with explanations that challenge his core beliefs.

      The Cautionary Tale: When Intellectual Rigidity Masquerades as Openness

      Peterson's method—his strategic balance of open- and closed-mindedness—is not unique to him. Many public intellectuals use similar techniques, whether consciously or unconsciously. However, his case is particularly instructive because it highlights the risks of appearing too open-minded while remaining fundamentally immovable.

      The Risks of "Humility Behind the Mote"

      Creates the Illusion of Growth Without Real Change
      
          By acknowledging complexity but refusing to revise core positions, one can maintain the illusion of intellectual evolution while actually reinforcing prior beliefs.
      
      Reinforces Ideological Silos
      
          Peterson’s audience largely consists of those who already align with his worldview. His debates often serve to reaffirm his base rather than genuinely engage with alternative perspectives.
      
      Undermines Genuine Inquiry
      
          If public intellectuals prioritize rhetorical victories over truth-seeking, the broader discourse suffers. Intellectual engagement becomes performative rather than transformative.
      
      Encourages Polarization
      
          By appearing open while remaining rigid, thinkers like Peterson contribute to an intellectual landscape where ideological battle lines are drawn more firmly, rather than softened by genuine engagement.
      

      Conclusion: The Responsibility of Public Intellectuals

      Jordan B. Peterson is an undeniably influential thinker, and his emphasis on responsibility, order, and meaning resonates with many. However, his method of open-minded closed-mindedness serves as a cautionary tale. It demonstrates the power of intellectual posturing—how one can appear receptive while maintaining deep ideological resistance.

      For true intellectual growth, one must be willing not only to entertain opposing views but to risk being changed by them. Without that willingness, even the most articulate and thoughtful engagement remains, at its core, a well-defended fortress.

      So like I said: pure evil AI slop is evil and addictive and must be banned, and we must lock up illegal GPU abusers and keep a GPU-owners registry, and keep track of those who would use them to abuse the shining lights of our society and try to snuff them out like a bad level of Luigi's Mansion.

  • Phoenicianpirate@lemm.ee · +16 · 6 days ago

    The one thing I learned when talking to ChatGPT or any other AI on a technical subject is that you have to ask the AI to cite its sources. AIs can absolutely bullshit without knowing it, and asking for the sources is critical for double-checking.

    • ameancow@lemmy.world · +9 · edited · 6 days ago

      I consider myself very average, and all my average interactions with AI have been abysmal failures that are hilariously wrong. I invested time and money into trying various models to help me with data analysis work, and they can’t even do basic math or summaries of a PDF and the data contained within.

      I was impressed with how good these things are at interpreting human fiction, jokes, writing and feelings. Which is really weird: in the context of our old perceptions of what AI would be like, it's the exact opposite. The first AIs aren't emotionless robots; they're whiny, inaccurate, delusional and unpredictable bitches. That alone is worth the price of admission for the humor and silliness of it all, but certainly not worth upending society over; it's still just a huge novelty.

      • Phoenicianpirate@lemm.ee · +4 · 6 days ago

        It makes HAL 9000 from 2001: A Space Odyssey seem realistic. In the movie he is a highly capable AI that doesn't understand the implications of what he wants to do. He sees Dave as a detriment to the mission and decides it can be better accomplished without him… never stopping to think about the implications of what he is doing.

        • ameancow@lemmy.world · +2 · 6 days ago

          I mean, leave it to one of the greatest creative minds of all time to predict that our AI would be unpredictable and emotional. The man invented the communication satellite and wrote franchises that are still being lined up for major Hollywood releases half a century later.

    • JackbyDev@programming.dev · +6 · 6 days ago

      I've found that questions about niche tools tend to get worse answers. I was asking it some stuff about jpackage and it couldn't give me any working suggestions or correct information. Stuff I've asked about Docker was much better.

      • vortic@lemmy.world · +5 · 6 days ago

        The ability of AI to write things with lots of boilerplate like Kubernetes manifests is astounding. It gets me 90-95% of the way there and saves me about 50% of my development time. I still have to understand the result before deployment because I’m not going to blindly deploy something that AI wrote and it rarely works without modifications, but it definitely cuts my development time significantly.

  • ColeSloth@discuss.tchncs.de · +28/-2 · 7 days ago

    I grew up as a kid without the internet. Google on your phone and YouTube kill your critical thinking skills.

    • Flying Squid@lemmy.world · +10/-1 · 7 days ago

      AI makes it worse though. People will read a website they find on Google that someone wrote and say, “well that’s just what some guy thinks.” But when an AI says it, those same people think it’s authoritative. And now that they can talk, including with believable simulations of emotional vocal inflections, it’s going to get far, far worse.

      Humans evolved to process auditory communications. We did not evolve to be able to read. So we tend to trust what we hear a lot more than we trust what we read. And companies like OpenAI are taking full advantage of that.

        • Flying Squid@lemmy.world · +2/-1 · 7 days ago

          I am not worried about people here on Lemmy. I am worried about people who don’t know much about computers at all. i.e. the majority of the public. They think computers are magic. This will make it far worse.

          • Petter1@lemm.ee · +1 · edited · 6 days ago

            I don't think those people will still be the majority in 20 years…

                • Flying Squid@lemmy.world · +3 · edited · 6 days ago

                  How old are you that 20 years is not so long?

                  And also, why does that matter that it’s not so long? Have you even bothered noticing all the damage Trump has done in under a month?

                  His administration just fired a bunch of people responsible for keeping U.S. nuclear weapons secure without knowing what their jobs were.

                  Less than one month.

    • VitoRobles@lemmy.today · +2 · 6 days ago

      I know a guy who ONLY quotes and references YouTube videos.

      Every topic, he answers with “Oh I saw this YouTube video…”

  • gramie@lemmy.ca · +15/-1 · 6 days ago

    I was talking to someone who does software development, and he described his experiments with AI for coding.

    He said that he was able to use it successfully and come to a solution that was elegant and appropriate.

    However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.

    • BigBenis@lemmy.world · +17 · 6 days ago

      I'm a senior software dev who uses AI to help with my job daily. There are endless tools in the software world, all with their own instructions on how to use them. Often they have issues, and the solutions aren't included in those instructions. It used to be that I had to hunt down any references to the problem I was having through online forums, in the hopes that somebody else had figured out how to solve the issue; now I can ask AI and it generally gives me the answer I'm looking for.

      If I'd had AI when I was still learning core engineering concepts, I think shortcutting the learning process could have been detrimental. But now I just need to know how to get X done specifically with Y, this one time and probably never again.

      • vortic@lemmy.world · +6 · 6 days ago

        100% this. I generally use AI to help with edge cases in software or languages that I already know well, or for situations where I really don't care to learn the material because I'm never going to touch it again. In my case, for Python or Golang, I'll use AI to get me started in the right direction on a problem, then go read the docs to develop my solution. For some weird, ugly regex that I just need to fix and never touch again, I just ask AI, test the answer it gives, then play with it until it works, because I'm never going to remember how to properly use a negative look-behind in regex when I need it again in five years.
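
        (For anyone else who forgets: a negative look-behind matches a position only when the text right before it does not match the given pattern. A toy Python example, with made-up strings:)

            import re

            # Match "bar" only when it is NOT immediately preceded by "foo".
            pattern = re.compile(r"(?<!foo)bar")

            # Skips the "bar" inside "foobar", keeps the other two.
            print(pattern.findall("foobar barfoo rebar"))  # ['bar', 'bar']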

        I do think AI could be used to help the learning process, too, if used correctly. That said, it requires the student to be proactive in asking the AI questions about why something works or doesn’t, then going to read additional information on the topic.

      • gramie@lemmy.ca · +1 · 6 days ago

        Because he has the knowledge and experience to completely understand the final product. It used an approach he hadn't thought of, one better suited to the problem.

  • /home/pineapplelover@lemm.ee · +21/-4 · 7 days ago

    Idk man. I just used it the other day for recalling some regex syntax, and it was a bit helpful. If you ask it to generate a regex for you, it won't do that successfully; however, it can break a regex down and explain it to you.

    Ofc you all can say "just read the damn manual". Sure, I could do that too, but asking a generative AI to explain a script can be just as effective.

      • /home/pineapplelover@lemm.ee · +1 · 2 days ago

        As I was learning regex, I wondered why * doesn't act like a wildcard and why I had to use .* instead. That doesn't make me lose my critical thinking skills. That was me wondering what was wrong with the way I was using the character.
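
        (For anyone else who hit the same wall: in regex, * is a quantifier meaning "zero or more of the preceding token", not a shell-style wildcard. The "any character" token is ., so "any run of characters" is .*. A quick Python illustration with a made-up filename:)

            import re

            text = "file_v2.txt"

            # Shell habit: "file*txt" actually means "fil", then zero or
            # more "e"s, then the literal "txt" -- no wildcard in the middle.
            print(re.search(r"file*txt", text))   # None

            # ".*" is "any character, zero or more times" -- a real wildcard.
            print(re.search(r"file.*txt", text))  # matches "file_v2.txt"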

    • Tangent5280@lemmy.world · +7 · 7 days ago

      Hey, just letting you know: getting the answers you want after getting a whole lot of answers you don't want is pretty much how everyone learns.

        • Womble@lemmy.world · +7 · 7 days ago

          Literally everyone learns from unreliable teachers; the question is just how reliable.

          • Nalivai@lemmy.world · +1/-1 · 2 days ago

            You are being unnecessarily pedantic. "A person can be wrong, therefore I will get my information from a random word generator" is exactly the attitude we need to avoid.
            A teacher can be mistaken, yes. But when they start lying on purpose, they stop being a teacher. When they don't know the difference between the truth and a lie, they never were one.

        • JackbyDev@programming.dev · +2/-1 · 6 days ago

          I’d rather learn from slightly unreliable teachers than teachers who belittle me for asking questions.

          • Nalivai@lemmy.world · +1 · 2 days ago

            No, obviously not. You don't actually learn if you get misinformation; it's actually the opposite of learning.
            But thankfully you don't have to choose between those two options.

    • Xatolos@reddthat.com · +5 · 7 days ago

      researchers at Microsoft and Carnegie Mellon University found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do, making it more difficult to call upon the skills when they are needed.

      It's one thing to try it yourself and then ask for help (as you did); it's another to just ask it to "do x" without thought or effort, which is what the study is about.

      • Petter1@lemm.ee · +1 · 6 days ago

        So the study just checks how many people haven't yet learned how to properly use GenAI.

        I think there's a curve from not trusting, to overtrusting, then back to not blindly trusting outputs (because you suffered consequences from blindly trusting).

        And there will always be people who blindly trust bullshit; we've had that for longer than GenAI. We have enough populists proving that you can tell many people just about anything and they'll believe it.

  • underwire212@lemm.ee · +8/-1 · 6 days ago

    It's going to remove all individuality and turn us into a homogeneous, jelly-like society. We'll all think exactly the same, since AI "smooths out" the edges of extreme thinking.

  • kratoz29@lemm.ee · +10/-1 · 6 days ago

    Is that it?

    One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I'm aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).

    Would some people not give a fuck about what it says and just copy & paste unknowingly? Sure, but that happened in my teenage days too, when all the info was shared across many blogs and wikis…

    As usual, it's not the AI tool that could fuck up our critical thinking, but ourselves.

        • pulsewidth@lemmy.world · +5/-2 · 6 days ago

          A hallucination is a false perception of sensory experiences (sights, sounds, etc).

          LLMs don’t have any senses, they have input, algorithms and output. They also have desired output and undesired output.

          So, no, 'hallucination' fits far worse than failure or error or bad output. However, assigning the term 'hallucination' does serve the billionaires in marketing their LLMs as actually sentient.

    • Petter1@lemm.ee · +2 · 6 days ago

      I see it exactly the same way; I bet you can find similar articles about calculators, PCs, the internet, smartphones, smartwatches, etc.

      Society will handle it sooner or later.

  • Jeffool @lemmy.world · +15 · edited · 7 days ago

    When it was new to me, I tried ChatGPT out of curiosity, like with any tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of inputs. "Give me a list of 3 X" led to fluff-filled paragraphs for each. The bastard children of a bad encyclopedia and the annoying kid in school.

    I realized I was understanding it wrong: it was meant to be taken not as a useful tool, but as something close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind that people say they use it when writing.