Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • UlyssesT [he/him]@hexbear.net · 1 year ago

    it’s subtly wrong in ways that require subject-matter experience to pick apart and it will contradict itself in ways that sound authoritative, as if they’re rooted in deeper understanding, but they’re extremely not

    Sounds a bit like the LLM hype riders in this thread, too.

      • UlyssesT [he/him]@hexbear.net · 1 year ago

        Garbage in, garbage out, and the very logical rational computer touchers took a big dose of hype marketing before coming here.

        • silent_water [she/her]@hexbear.net · 1 year ago

          Garbage in, garbage out

          what’s extremely funny to me is that this exact phrase was used when I was in college, in courses on AI and natural language processing, to explain why you shouldn’t do exactly what the OpenAI team later did. we were straight up warned not to do it, with a discussion on ethics centered on “what if it works and you don’t wind up with a model that spews unintelligible gibberish?” (the latter was mostly how it went back then - neural nets were extremely hard to train). there were a couple of kids who were like “…but it worked…” and the professor pointedly made them address the consequences.

          this wasn’t even some liberal arts school - it was an engineering school with barely a couple of profs qualified to teach philosophy and ethics. it just used to be the normal way the subject was taught, back when it was still normal to discourage the use of neural nets for practical and ethical reasons (consider almost succeeding and driving a fledgling, sentient being insane because you fed it a torrent of garbage).

          I went back during the ML boom and sat in on a class - the same prof had cut all of that out of the curriculum. he said it was because the students complained about it and the new department had told him to focus on teaching them how to write/use the tech, and they’d add an ethics class later.

          agony-acid

          instead, we just have an entire generation who have been taught to fail the Turing test against a chatbot that can’t remember what it said a paragraph ago. I feel old.

          • UlyssesT [he/him]@hexbear.net · 1 year ago

            (consider almost succeeding and driving a fledgling, sentient being insane because you fed it a torrent of garbage)

            Microsoft Tay, after one day of exposure to internet nazis sus-torment

            I went back during the ML boom and sat in on a class - the same prof had cut all of that out of the curriculum. he said it was because the students complained about it and the new department had told him to focus on teaching them how to write/use the tech, and they’d add an ethics class later.

            agony-deep

            instead, we just have an entire generation who have been taught to fail the Turing test against a chatbot that can’t remember what it said a paragraph ago

            And like the LLMs themselves, they’ll confidently be wrong and assume knowledge and mastery that they simply don’t have, as seen in this thread. berdly-smug

            • silent_water [she/her]@hexbear.net · edited · 1 year ago

              Microsoft Tay, after one day of exposure to internet nazis

              even less coherent. a neural net trained on a bad corpus won’t even produce words. it’s like mashing your face on the keyboard in a way that produces things that sound like words, inserted into and around actual, incomprehensible text. honestly, reading what gpt3 produced, I think that’s what was happening to a degree, and that they were doing postprocessing to extract usable text.

              And like the LLMs themselves, they’ll confidently be wrong and assume knowledge and mastery that they simply don’t have, as seen in this thread.

              did they get banned? I expected more angry, nonsense replies. “did chatgpt write this?” is such a fun and depressing game.