• flossdaily@lemmy.world · 10 months ago

    Seems like this could actually be a good thing, though, since it could destroy the market for the genuine article?

    Couldn’t this lead to a REDUCTION in the abuse of children?

    • OscarRobin@lemmy.world · 10 months ago

      One of the reasons that hentai etc. of children is illegal in many regions is that abusers apparently use it to coerce children into performing the acts themselves, so even if it isn’t created through abuse it may still aid abusers. The same argument could be made against ‘AI’ images.

      • BradleyUffner@lemmy.world · 10 months ago

        That seems a little broad though… I mean, a baseball bat can also be used to coerce someone into things.

        • OscarRobin@lemmy.world · 10 months ago

          I agree it seems like a bit of odd reasoning - especially when it’s not hard to find legal depictions that could presumably be used for coercion just as easily as the illegal cartoons etc.

    • PoliticalAgitator@lemm.ee · 10 months ago

      Sure, but it could also normalise images of children being raped and dramatically INCREASE the abuse of children. After all, widespread access to pornography didn’t cure the world of rapists.

      Why would it be any different with AI-generated images of children, and why should we risk our kids so that some internet pedo can jerk off?

    • jpeps@lemmy.world · 10 months ago

      I’m no expert, but I think this is the wrong take. I see two potential issues here:

      • It can be difficult for people not to escalate behaviour. Say sexual harassment somehow became legal or permissible. How much more sexual assault might follow? CSAM can be just the beginning for some people.
      • This makes CSAM more easily available, and therefore likely means more people are accessing/generating it. See point one.
      • FMT99@lemmy.world · 10 months ago

        I think you’re edging towards thought crime territory here. No one is saying the real thing should be legal or the attraction should be socially acceptable.

        But if this reduces the market for the real thing I see that as a win.

        • jpeps@lemmy.world · 10 months ago

          In my comment I’m stating that I think (again, not an expert) that it would grow the market and ultimately increase child abuse.

          Secondly, on the subject of legality, it will of course depend on your country, but I believe in most places there is no distinction, at least currently, between ‘simulated’ CSAM and CSAM that directly involves the exploitation of minors. If you put a thought to paper, it’s not a thought anymore.

          I think that the attraction should more or less be socially acceptable. No one chooses the feeling, and they can’t be blamed for having it. People need to be able to ask for help and receive support for what I imagine is an extremely tough thing to deal with. I do not think in the slightest that CSAM in any form, including simulated (which would also cover hand-drawn material), should be remotely socially acceptable.

    • zepheriths@lemmy.world · 10 months ago

      I mean, the issue is whether they will remain a well-adjusted individual if they have access to CP. It is well known that it is a slippery slope when it comes to CP. Frankly, I think this just makes it easier to get started, which is not what we want.

      • FMT99@lemmy.world · 10 months ago

        I don’t think people who are well adjusted are likely to go down this slope. Could you be convinced to download that stuff? If someone has that interest it’s there to begin with, not caused by looking at a picture.

    • FarraigePlaisteach@kbin.social · 10 months ago

      It could make it harder to find trafficked and abused children. Now investigators first have to separate the fake from the real. And what a sickening job for the poor person taking on this responsibility.

  • Fal@yiffit.net · 10 months ago

    How is it child sexual abuse content if there’s no child being abused? The child doesn’t even exist.

    • quindraco@lemm.ee · 10 months ago

      Exactly. Assuming this article means the American government when it says “government”, the First Amendment firmly protects entirely fictional accounts of child abuse, sexual or not. If it didn’t, Harry Potter would be banned or censored.

      • Fal@yiffit.net · 10 months ago

        Did you not read anything else in this thread and just randomly replied to me?

    • huginn@feddit.it · 10 months ago

      It is the product of abuse, though. Abuse materials are used to train the AI.

      • BradleyUffner@lemmy.world · 10 months ago

        No they aren’t. An AI trained on normal everyday images of children and sexual images of adults could easily synthesize these images.

        Just like it can synthesize an image of a possum wearing a top hat without being trained on images of possums wearing top hats.

        • gila@lemm.ee · 10 months ago

          According to forum discussions seen by the IWF, offenders start with a basic source image generating model that is trained on billions and billions of tagged images, enabling them to carry out the basics of image generation. This is then fine-tuned with CSAM images to produce a smaller model using low-rank adaptation, which lowers the amount of compute needed to produce the images.

          They’re talking about a Stable Diffusion LoRA trained on actual CSAM. What you described is possible too, but it’s not what the article is pointing out.

      • Sphks@lemmy.dbzer0.com · 10 months ago

        I can get “great” results trying to generate a naked child with standard Stable Diffusion models. They are not trained on abuse material, but they infer a naked child from hentai. Actually, it’s more of a problem than that: most of the time I have to fight the generator not to generate sexy women, and when generating sexy women you sometimes have to fight to keep them from looking too young.

        • gila@lemm.ee · 10 months ago

          That’s because the example they gave either a) combines two concepts the AI already understands, or b) adds a new concept to another already understood concept. It doesn’t need to specifically be trained on images of possums wearing top hats, but it would need to be trained on images of lots of different subjects wearing top hats. For SD the top hat and possum concepts may be covered by the base model datasets, but CSAM isn’t. Simply training a naked adult concept as well as a clothed child concept wouldn’t produce CSAM, because there is nothing in either of those datasets that looks like CSAM, so it doesn’t know what that looks like.

  • ShittyRedditWasBetter@lemmy.world · 10 months ago

    And? The intent is to avoid harm, not to blanket-shame and criminalize.

    The cat is out of the bag. You better believe people are going to do this, make images of partners they desire, and in general do anything an artist with imagination can do.

    • Asafum@feddit.nl · 10 months ago

      That’s the one thing I don’t get… I obviously completely understand banning regular CSAM (obviously we don’t want people making or sharing that), but if it’s like the loli art stuff, then why criminalize that? No one is being hurt, and if you’re worried about normalizing it for the individual consuming it, then it’s kind of like enforcing a “pre-crime” mentality where we just assume that anyone who consumes the material will eventually offend for real.

      I say if it keeps them away from actual children then it’s a positive.

      • HubertManne@kbin.social · 10 months ago

        Yeah, I agree with you guys. It’s like the violence in video games thing. It’s not actual violence; it doesn’t really exist, and it’s not a gateway drug to people becoming killers. Each person will find their own things. I played like 30 seconds of the original or second GTA way back and put down the controller because I felt bad, but later I played the open-world Spider-Man in much the same way and loved it. I play Cyberpunk now with no real qualms; I guess it’s the fantasy element or something, but it doesn’t bother me as much.

  • NoGoldHere@kbin.social · 9 months ago

    I made an account to reply to this thread. I am a psychology student specializing in sexuality!

    So, first: yes, in most places, particularly the US, images which are indistinguishable from a real child (such as photorealism) are included as CSAM, regardless of whether or not the child “exists.” This does not include things like lolicon, cub, cartoons, every historical painting ever, etc. There is no provable harm caused by cartoon images of “sexualized minors,” as determined by the Sexological Clinic (seen here, translated from its original language).

    That would mean that, yes, hyper-realistic imagery of children in sexual situations does, at least in the USA, count as CSAM. As a psychology student, I can also say there is a heavy push to keep patients with pedophilic or inhibition disorders from seeking out such material, as it can and does lead to harm. Once again, I am not talking about lolicon-- I am talking about CSAM.

    Laws may vary in regards to the UK, AU, CA, FR, etc. To my understanding, all of those countries claim to ban any such depiction, regardless of its level of toony-ness. They also allow works like Stephen King’s It and Toni Morrison’s The Bluest Eye, which technically fall under their ban. CA also sells Interspecies Reviewers, known for its loli content, in bookstores. So, I suppose you should be "grain of salt"ing this one. Also note that as an American, I do not know the intricacies of other countries’ laws-- we are barely taught our own.

    And, no, the PROTECT Act is not in effect in the USA; it was deemed unconstitutional. I’m only providing what I can as a student with a vested interest in this kind of thing.

    TL;DR
    Photorealistic AI CSAM is still CSAM and is still bad. AI images of loli hentai are not comparable.
    No notes on roleplay/text; that’s probably case-by-case (but is likely viewed more similarly to the loli thing).

  • Treczoks@lemm.ee · 10 months ago

    Stability AI, the UK company behind Stable Diffusion, told the BBC it “prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM”.

    And who in the real world expects those people to obey these terms of service?

    • Player2@sopuli.xyz · 10 months ago

      Terms of service have always been about protecting the company from legal problems rather than anything else.

    • gila@lemm.ee · 10 months ago

      To their credit, they’ve always had a safety checker on by default; it just isn’t very good and returns a lot of false positives, so it quickly became standard practice to bypass it.

  • Fungah@lemmy.world · 9 months ago

    I was fine-tuning a Stable Diffusion model on my face once when the thought occurred to me… someone, somewhere is training this thing to spit out CSAM.

    I had this dark, heavy feeling for a few days after that. Something about a button that will just generate an infinite amount of child porn. Ugh, it’s appalling.

    In Canada we’ve banned even cartoon pictures of minors. It seemed extreme to me at first because it’s considered to be on the same level as actual CSAM. Now, however, I’m glad. It doesn’t take a lot to feed a Stable Diffusion program a cartoon image and have it spit out something realistic.