In this video I discuss how generative AI technology has grown far past the government's ability to effectively control it, and how current legislative measures could lead to innocent people being jailed.

  • Vendetta9076@sh.itjust.works · 8 months ago

    While lolicon is absolutely disgusting, it's not actually CSAM. Legislation won't work either and is honestly a waste of time. Any effort spent protecting digital children should instead be spent protecting real ones.

    • MuchPineapples@lemmy.world · 8 months ago

      The problem is that it's not just cartoon characters, but also realistic-looking people. Especially in the coming years as the techniques improve, that makes it impossible to know what is fake and what is not, so the fakes should also be banned. And these models are trained on images of actual abused children, which is of course the main problem here.

      • Microw@lemm.ee · 8 months ago

        It wouldn't surprise me, tbh. From my superficial visit to the darknet years ago, it seemed like these CSAM consumers have specific "favourites" among the victims whom they want to see more of. At least that's what I remember from clicking a link to such a chan and noping out of it.

        • RaincoatsGeorge · 8 months ago

          What isn't happening? Them making fake CSAM? I haven't seen it because I don't want to see it, but I am confident it's occurring. Some kid already got busted feeding images of girls in his class into an image generator and making nudes of them.

          So while it might not be widespread, it's 100 percent happening and will increase.

          Honestly, releasing these generators to the general public was a mistake. They thought they could put up safety measures, but those are easily bypassed. I think they should have kept them locked up and only given access to people who are registered and trackable, with reviewers checking what they're generating.

          All of these AI generators are getting abused left and right, and anyone who didn't think that would happen is an idiot.

          • FunkyCasual@lemmygrad.ml · 8 months ago

              No, I’m saying the models aren’t being trained with actual CSAM. The comment I replied to was about training, not generation.

            All I was saying is that you don't need to train a model on child abuse images to get it to output child abuse images.

            • datavoid@lemmy.ml · 8 months ago

              Do you really think the people generating CSAM give a fuck about their training data? They are making the content because they enjoy it. I'd guess they'd use all the training data available (of which they would likely have plenty, considering their interests).

              • FunkyCasual@lemmygrad.ml · 8 months ago

                  The people generating it are rarely the ones who are training the models. They take pretrained models and prompt them for what they want.

                  Even if they were training a model for a specific subject, they could train it with any pictures of the subject and combine it with another model that can generate the kind of image they want.

                There is absolutely no reason they would need abuse images for training. There are far better general NSFW models available right now than they could ever train themselves.