Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post - there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this)

  • BlueMonday1984@awful.systems

    (Another post so soon? Another post so soon.)

    “Gen AI competes with its training data, exhibit 1,764”:

    exhibit 1764

    Also got a quick sidenote, which spawned from seeing this:

    This is pure gut feeling, but I suspect that “AI training” has become synonymous with “art theft/copyright infringement” in the public consciousness.

    Between AI bros publicly scraping against people’s wishes (Exhibit A, Exhibit B, Exhibit C), the large-scale theft of data which went to produce these LLMs’ datasets, and the general perception that working in AI means you support theft (Exhibit A, Exhibit B), I wouldn’t blame Joe Public for treating AI as inherently infringing.

    • blakestacey@awful.systems (OP)

      I researched cool topics using ChatGPT, Claude, Google

      That’s not what research means, you embossed carbuncle.

      I linked NotebookLM to the Wikipedia entry of each topic and generated the podcast audio

      It’s fucking James Somerton with extra steps!

      • Amoeba_Girl@awful.systems

        jesus christ, as if “podcasts” consisting of random idiots regurgitating wikipedia weren’t bad enough as it is

    • froztbyte@awful.systems

      “curated”

      it shouldn’t surprise me that the dipshit who was massively involved in “solving language” doesn’t understand the meaning of words, but grrrrr

      (and I say that as an armchair linguist who understands that language is as people use it (fuck prescriptivism))

      • V0ldek@awful.systems

        dipshit who was massively involved in “solving language”

        “In the what now?”, he said, voice trembling with a mixture of horror and excitement

        • froztbyte@awful.systems

          2015 - I was a research scientist and a founding member at OpenAI.

          proudly displayed on his blog timeline

            • froztbyte@awful.systems

              no, that’s a personal extrapolation/framing characterising some of the shit I’ve seen from these morons

              (they engaged with very few linguists in the making of their beloved Large Language Models, instead believing they can just data-bruteforce it; this plan has gone as well as has been observed)