• Mike@lemmy.ml
    6 months ago

    I think the challenge with generative AI CSAM is the question of where the training data originated. There has to be some questionable data in there.

    • scoobford
      6 months ago

      That would mean enforcing the law against whoever built the model. If the original creator trained it on 100TB of cheese pizza, then they should be the one who gets arrested.

      Otherwise you’re busting random customers at a pizza shop for possession of the meth the cook smoked before his shift.

    • erwan@lemmy.ml
      6 months ago

      There is also the issue of determining whether a given image is real or AI-generated. If AI-generated material were legal, prosecutors would have to prove that an image is real rather than AI-generated, at the risk of letting real offenders go free.

      The case for banning AI CSAM is even clearer than the case for banning cartoon CSAM.

      • Madison420@lemmy.world
        6 months ago

        And in the process force non-abusers to seek their thrill with actual abuse. Good job, I’m sure the next generation of children will appreciate your prudish, factually inept effort. We’ve tried this with so many things: prohibition doesn’t stop anything, it just creates a black market and an abusive power system to go with it.