cross-posted from: https://lemmy.zip/post/15863526

Steven Anderegg allegedly used the Stable Diffusion AI model to generate photos; if convicted, he could face up to 70 years in prison

  • PoliticalAgitator@lemmy.world

    Because they are images of children being graphically raped, a form of abuse. Is an AI generated picture of a tree not a picture of a tree?

    • Daxtron2@startrek.website

      No it isn’t, not anymore than a drawing of a car is a real car, or drawings of money are real money.

              • PoliticalAgitator@lemmy.world

                If Paedophile Hill is the hill you want to die on, it’s no loss to me, so I’ve got zero interest in your “Ceci n’est pas une child rape” defense.

                • Daxtron2@startrek.website

                  And yet you still engaged with it. If we’re gonna classify every picture/drawing/gen that makes people uncomfortable as CSAM, it distracts from the actual CSAM that is running rampant

                  • PoliticalAgitator@lemmy.world

                    I’m not engaging for your benefit, which is why I’ve got no interest in repeating the same point in 500 ways in the hope it sinks in. But the reality is that a lot of people get their opinions from social media and they sure as fuck shouldn’t imitate your views on CSAM so it’s important that nobody mistakes contrarianism and apologism for actual wisdom.

                    But yes, it is hard to stand by while you lie your little heart out in a way that helps paedophiles. I’m not ashamed or embarrassed about that.

                    So here’s how it will play out: Your bullshit apologism and enabling will result in the creation of platforms for circulating child pornography. These platforms will immediately be flooded with pictures and videos of children being raped that are indistinguishable from “genuine” child pornography, thanks to models being trained on paedophiles’ back catalogues.

                    As the amount of content grows, more and more videos of actual children being raped will enter circulation, with moderators and paedophiles wriggling out of it by claiming “I thought it was AI generated”.

                    New videos featuring the rape of actual children will be created and posted to these communities as child pornography normalises the abuse of children for the members. Detection and prosecution of the people responsible will be functionally impossible because they’ve been buried and obfuscated by the AI generated content you insist doesn’t count.

                    But hey, at least your bullshit semantic sensibilities haven’t been offended, right? That seems way more important to you than the abuse of children anyway. You’re basically a hero for selflessly safeguarding paedophiles’ jerk-off material.

                    We’re not talking about “drawings of children being raped that make people uncomfortable”. We’re talking about pictures and videos that are indistinguishable from reality, featuring children being coerced or forced into performing every act and fetish known to pornography.

                    And you fucking know it.

      • laughterlaughter@lemmy.world

        Nobody is saying they’re real, and I now see what you’re saying.

        By your answers, your question is more “at-face-value” than people assume:

        You are asking:

        “Did violence occur in real life in order to produce this violent picture?”

        The answer is, of course, no.

        But people are interpreting it as:

        “This is a picture of a man being stoned to death. Is this picture violent, if no violence took place in real life?”

        To which the answer is, yes.

          • laughterlaughter@lemmy.world

            We’re not disagreeing.

            The question was:

            “Is this an abuse image if it was generated?”

            Yes, it is an abuse image.

            Is it actual abuse? Of course not.

              • PoliticalAgitator@lemmy.world

                Images of children being raped are being treated as images of children being raped. Nobody has ever been caught with child pornography and charged as if they abused the children themselves, nor is anybody advocating that people generating AI child pornography be charged as if they sexually abused a child.

                Everything is being treated as it always has been, but you’re here arguing that it’s moral and harmless as long as an AI does it, using every semantic trick and shifted goalpost you possibly can.

                It’s been gross as fuck to watch. I know you’re aiming for a kind of “king of rationality, capable of transcending even your disgust of child abuse” thing, but every argument you make is so trivial and unimportant that you’re coming across as someone hoping CSAM becomes more accessible.

              • laughterlaughter@lemmy.world

                Well, that’s another story. I just answered your question. “Are these images about abuse even if they’re generated?” Yup, they are.

                “Should people be prosecuted because of them?” Welp, someone with more expertise should answer this. Not me.

        • Daxtron2@startrek.website

          That has nothing to do with logic? It’s pointing out that both drawings and AI gens are not really the things they might depict

    • Leg@lemmy.world

      It’s a picture of a hallucination of a tree. Distinguishing real from unreal ought to be taken more seriously given the direction technology is moving.

      • PoliticalAgitator@lemmy.world

        It’s a picture of a hallucination of a tree

        So yes, it’s a tree. It’s a tree that might not exist, but it’s still a picture of a tree.

        You can’t have an image of a child being raped – regardless of whether that child exists or not – that is not CSAM, because it’s an image of a child being sexually abused.

        Distinguishing real from unreal ought to be taken more seriously given the direction technology is moving.

        Okay, so who are you volunteering to go through an endless stream of images and videos of children being raped to verify that each one has been generated by an AI and not a scumbag with a camera? Paedos?

        Why are neckbeards so enthusiastic about dying on this hill? They seem more upset that there’s something they’re not allowed to jerk off to than by the actual abuse of children.

        Functionally, legalising AI generated CSAM means legalising “genuine” CSAM because it will be impossible to distinguish the two, especially as paedophiles dump their pre-AI collections or feed them in as training data.

        People who do this are reprehensible, no matter what hair splitting and semantic gymnastics they employ.

        • Leg@lemmy.world

          Hey man, I’m not the one. I’m literally just saying that the images that AI creates are not real. If you’re going to argue that they are, you’re simply wrong. Should these ones be generated? Obviously I’d prefer that they not be. But they’re still effectively fabrications that I’m better off simply not knowing about.

          If you want to get into the weeds and discuss the logistics of enforcing what is essentially thought crime, that is a different discussion I’m frankly not savvy enough to have here. I have no control over the ultimate outcome, but for what it’s worth, my money says thought crime will in fact become a punishable offense within our lifetimes, and this may well be an easy catalyst to use to that end. This should put your mind at ease.

          • PoliticalAgitator@lemmy.world

            The thread is about “how are they abuse images if no abuse took place” and the answer is “because they’re images of abuse”. I haven’t claimed they’re real at any point.

            It’s not a thought crime because it’s not a thought. Nobody is being charged for thinking about raping children, they’re being charged for creating images of children being raped.

            • Leg@lemmy.world

              If the images are generated and held by a single person, it may as well be a thought crime. If I draw a picture of a man killing an animal – an image depicting a heinous crime spawned by my imagination – and I go to prison over this image, I would consider this a crime of incorrect thought. There are no victims, no animals are harmed, but my will spawned an image of a harmed animal. Authorities dictated I am not allowed to imagine this scenario, and I am punished for it. I understand that the expression of said thought is what’s being punished, but that is very literally the only way to punish a thought to begin with (for now), hence freedom of expression being a protected right.

              The reason this is a hard issue to discuss in this context is because the topic at hand is visceral and charged. No one wants to be caught dead defending the rights of a monster, lest they be labeled a monster themselves. I see this as a failure of society to know what to do about people like this, opting instead to throw them into a box and hope they die there. If our justice system wasn’t so broken, I might give less of a shit, but as it stands I see this response as shortsighted and inhumane.