Several months ago, Beehaw received a report about CSAM (i.e. Child Sexual Abuse Material). As an admin, I had to investigate in order to verify the report and take the next steps. This was the first time in my life that I had ever seen images such as these. Not to go into great detail, but the images were of a very young child performing sexual acts with an adult.

The explicit nature of these images, the gut-wrenching shock and horror, the disgust and helplessness were overwhelming to me. Those images are burnt into my mind, and I would love to get rid of them, but I don’t know how, or if it is even possible. Maybe time will take them out of my mind.

In my strong opinion, Beehaw must seek a platform where NO ONE will ever have to see these types of images: a software platform that makes it nearly impossible for Beehaw to host CSAM in any way.

If the other admins want to give their opinions about this, then I am all ears.

I simply cannot move forward with the Beehaw project unless this is one of our top priorities when choosing where we are going to go.

  • apis@beehaw.org

    So, so sorry you had to see that, and thank you for protecting the rest of us from seeing it.

    On traditional forums, you’d have a lot of control over the posting of images.

    If you don’t wish to block images entirely, you could block new members from uploading images, or even from sharing links. You could set things up so they’d have to earn the right to post by being active for a randomised amount of time and making a randomised number of posts/comments. You could add manual review to that, so that once a member has ostensibly been around long enough and participated enough, admins look at their activity pattern as well as their words to assess whether they should be taken off probation.

    Members who have been inactive for a while could have image-posting abilities revoked and be put through a similar probation if they return. You could block all members from sharing images & links via DM entirely, and admin email accounts could be set to reject images.
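
    A rough, purely hypothetical sketch of that gating idea (the names and thresholds are made up, not anything Beehaw actually runs):

    ```python
    # Probation check: a member only gains image/link posting rights after a
    # randomised tenure and post count, and only once an admin has signed off.
    import random
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass
    class Member:
        joined_at: datetime
        post_count: int
        admin_approved: bool = False
        # Randomised thresholds, fixed per member, so the bar can't be predicted.
        required_days: int = field(default_factory=lambda: random.randint(14, 60))
        required_posts: int = field(default_factory=lambda: random.randint(20, 100))

    def may_post_images(member: Member, now: datetime) -> bool:
        tenure_ok = now - member.joined_at >= timedelta(days=member.required_days)
        activity_ok = member.post_count >= member.required_posts
        return tenure_ok and activity_ok and member.admin_approved
    ```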

    It is probably possible to obtain tooling that rejects images containing any sexual content (checked against a database of known sexual material which does not involve minors), and you could probably also reject images that appear to contain children and might not be wholesome (checked against a database of ordinary, non-abusive images of children).
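
    As a very rough illustration of that hash-based screening idea (the imagehash library and the blocklist values here are purely illustrative; real deployments rely on vetted databases and services such as PhotoDNA):

    ```python
    # Reject uploads whose perceptual hash is close to a known-bad hash.
    from PIL import Image
    import imagehash

    BLOCKLIST = {"8f373714acfcf4d0"}  # placeholder hex hashes, not real data
    MAX_DISTANCE = 5  # Hamming distance below which two images count as a match

    def is_blocked(path: str) -> bool:
        h = imagehash.phash(Image.open(path))
        return any(h - imagehash.hex_to_hash(bad) <= MAX_DISTANCE for bad in BLOCKLIST)
    ```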

    Aside from the topic at hand, a forum might decide to block all images of children, because children aren’t really in a position to consent to their images being shared online. That gets tricky when it comes to late teens & early 20s, but if you’ve successfully filtered out infants, young children, pre-teens & early teens, as well as all sexual content, it is very unlikely that images of teenagers being abused would get through.

    Insisting that images are not uploaded directly, but shared via links to image-hosting sites, might give admins an extra layer of protection, as the hosting sites have their own anti-CSAM mechanisms. You’d probably want to whitelist permitted sites. You might also want a slight delay between the posting of an image link and the image appearing on Beehaw; this would allow time for the hosting site to find & remove any problem images before they could appear on Beehaw (though I’d imagine these things are pretty damn fast by now).
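
    A quick sketch of how that allowlist-plus-delay could look (the hosts and hold period are placeholders):

    ```python
    # Accept an image link only if its host is on an allowlist, and hold it
    # back for a while so the hosting site's own moderation can act first.
    from datetime import datetime, timedelta
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"imgur.com", "i.imgur.com"}  # placeholder allowlist
    HOLD_PERIOD = timedelta(hours=6)              # placeholder delay

    def accept_image_link(url: str, submitted_at: datetime):
        host = (urlparse(url).hostname or "").lower()
        if host not in ALLOWED_HOSTS:
            return None  # reject links to unlisted hosts
        return {"url": url, "visible_after": submitted_at + HOLD_PERIOD}
    ```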

    You could also insist that members who wish to post images, or links to images, can only do so if they have their VPN and other privacy-preserving methods disabled. Most members wouldn’t be super-enthused about this until they’ve developed trust in the admins of the site, but anyone hoping to share images of children being abused or other illegal content will just go elsewhere.

    Admins would probably need to be able to receive screenshots from members trying to report technical issues, but those should be relatively easy to whitelist with a bot of some sort? Or maybe there’s a nifty plugin for this?

    Really though, blocking all images is going to be your best bet. I like the idea of just having the Beehaw bee drawings. You could possibly let us pick from a selection of avatars, or have a little drawing plugin so members can draw their own. On that note, those collaborative drawing plugins can be a fun addition to a site… If someone is very keen for others to see a particular image, they can explain how to find it, or they can organise to connect with each other off Beehaw.

    • jarfil@beehaw.org

      block new members from uploading images

      I tried those methods something like 10 years ago. They didn’t work; people would pose as decent users, then suddenly switch to posting shit once allowed. I suspect that nowadays, with ChatGPT and similar tools, those methods would fail even more.

      Modern filtering methods for images may be fine(-ish), but they won’t stop NSFL and text-based stuff.

      Blocking VPN access to a site intended as a safe space seems contradictory.

      anyone hoping to share […] illegal content will just go elsewhere

      Like someone else’s free WiFi. Wardriving is still a thing.

      draw plugin so members can draw their own

      That can be easily abused, either manually or through a bot. Reddit has the right idea there: an avatar generator with pre-approved elements. Too bad the options are pretty stifling (and they sell the interesting ones as NFTs).

      • apis@beehaw.org

        Yup. As it gets ever easier to overwhelm systems, there are no good solutions to the matter, aside from keeping it text-only + Beehaw’s own drawings.

        • jarfil@beehaw.org

          Some text-only creepypastas are equally disturbing, and illegal in some places. IIRC a Lemmy instance in Ireland had to close shop because their legislation applies to both “images” and “descriptions of images”.

          • apis@beehaw.org

            True, but this is assuming one wishes to have a place to communicate online at all.

            And though text can be intensely disturbing, it is inherently different from images/footage of real children actually being harmed.

            • jarfil@beehaw.org

              Yeah… you’ll have to excuse me, because while I’d love to delve deeper into the philosophy of perception, the art of rhetoric, or how the AIs can upend it all… I’ll have to leave it here, since I’ve been told in no uncertain terms that this is not the place to discuss this kind of stuff.

              Maybe we could meet in some other safe space, focused on pure intellectual discussions, if such a place existed.

              • apis@beehaw.org

                That’s fair.

                Not currently using other spaces, nor aware of any suited to the topic (thankfully, I suspect).

                • jarfil@beehaw.org

                  There are some… just not safe, and/or not intellectual. I’d start one, but between the shitstorms going on over here and my current IRL drama, I kind of don’t feel like it ATM.