cross-posted from: https://lemmy.zip/post/15863526

Steven Anderegg allegedly used the Stable Diffusion AI model to generate photos; if convicted, he could face up to 70 years in prison

    • laughterlaughter@lemmy.world · 1 month ago

      We’re not disagreeing.

      The question was:

      “Is this an abuse image if it was generated?”

      Yes, it is an abuse image.

      Is it actual abuse? Of course not.

        • PoliticalAgitator@lemmy.world · edited · 30 days ago

          Images of children being raped are being treated as images of children being raped. Nobody has ever been caught with child pornography and charged as if they had abused the children themselves, nor is anybody advocating that people generating AI child pornography be charged as if they had sexually abused a child.

          Everything is being treated as it always has been, but you’re here arguing that it’s moral and harmless as long as an AI does it, using every semantic trick and shifted goalpost you possibly can.

          It’s been gross as fuck to watch. I know you’re aiming for a kind of “king of rationality, capable of transcending even your disgust of child abuse” thing, but every argument you make is so trivial and unimportant that you’re coming across as someone hoping CSAM becomes more accessible.

        • laughterlaughter@lemmy.world · 1 month ago

          Well, that’s another story. I just answered your question: “Is this an abuse image even if it was generated?” Yup, it is.

          “Should people be prosecuted because of them?” Welp, someone with more expertise should answer this. Not me.