Hi All,

You may have seen the issues occurring on some lemmy.world communities regarding CSAM spamming. Thankfully it appears they have taken the right steps to reduce these attacks, but there are still issues with the way Lemmy caches images from other servers, meaning CSAM could make its way onto our image server without us being aware.

Therefore we’re taking a balanced approach to the situation and trying to deal with these issues in the least impactful way.

As you read this we’re using AI (with thanks to @db0’s fantastic lemmy-safety script) to scan our entire image server and automatically delete any possible CSAM. This does come with caveats, in that there will absolutely be false positives (think memes with children in them), but this is preferable to nuking the entire image database or stopping people from uploading images altogether.

This won’t be a 100% guarantee that all CSAM is removed from the server, but it’s a lot better than doing nothing.
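For anyone curious what this kind of scan looks like in principle, here’s a minimal sketch - not db0’s actual lemmy-safety script. It assumes a local pict-rs media directory and uses an off-the-shelf Hugging Face classifier (Falconsai/nsfw_image_detection) as a stand-in for the real model; flagged files are moved aside rather than ever opened.

```python
# Rough sketch only - lemmy-safety itself works differently (it scans
# pict-rs object storage). Paths, model, and threshold are assumptions.
from pathlib import Path
import shutil

from transformers import pipeline  # pip install transformers pillow torch

IMAGE_DIR = Path("/srv/pictrs/files")   # assumed media path
QUARANTINE = Path("/srv/quarantine")    # flagged files go here, unviewed

classifier = pipeline("image-classification",
                      model="Falconsai/nsfw_image_detection")

def scan() -> None:
    QUARANTINE.mkdir(parents=True, exist_ok=True)
    for img in IMAGE_DIR.rglob("*"):
        if not img.is_file():
            continue
        try:
            results = classifier(str(img))
        except Exception as exc:  # non-image files, corrupt uploads, etc.
            print(f"skipping {img}: {exc}")
            continue
        # results look like [{"label": "nsfw", "score": 0.97}, ...]
        if any(r["label"] == "nsfw" and r["score"] > 0.9 for r in results):
            # Move rather than delete outright, and never open the file.
            shutil.move(str(img), QUARANTINE / img.name)

if __name__ == "__main__":
    scan()
```

A scan like this is exactly where the false positives mentioned above come from: an innocent meme that happens to trip the classifier gets swept up along with everything else.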

We have a pretty good track record locally with account bans (maybe one or two total), which is great, but if we notice an uptick in spam accounts we’ll look to introduce measures to stop these bots if they slip past the registration process. For example, ZippyBot can already take over community creation, which would stop any new account from creating communities; only those with a high enough account score would be able to do so.

We don’t need (or want) to enable this yet, but just want you all to know we have tools available to help keep this instance safe if we need to use them.
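To make the ZippyBot idea concrete, here’s a hypothetical sketch of what such a community-creation gate could look like - not ZippyBot’s actual code, and the threshold is just an example:

```python
# Hypothetical community-creation gate - not ZippyBot's real implementation.
from dataclasses import dataclass

MIN_SCORE = 10  # example threshold, not settled policy

@dataclass
class Account:
    name: str
    score: int  # combined post + comment score

def may_create_community(account: Account) -> bool:
    """Only accounts with a high enough score get to create communities."""
    return account.score >= MIN_SCORE

# A brand-new spam account is refused; an established user is not.
assert not may_create_community(Account("fresh_bot", 0))
assert may_create_community(Account("regular_user", 42))
```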

Any questions please let me know.

Thanks all
Demigodrick

    • DemigodrickOPMA · 10 months ago

      It is something we almost turned on early on, during the original spam wave, but we decided not to at the time because of the added complications for new users, and because it might put people off joining lemmy.zip if they want a space to create genuine communities.

      It’s a tricky one to balance, which is why I only really want to introduce it in response to something. As it stands, we have built a respectful, awesome community here with very few issues, and I don’t think it’s a measure we need. However, I’m happy to get wider feedback from the users here if this is something they want to see introduced. Maybe with a three-day account age and a 10-score minimum, or something like that.

      (Anyone interested can check their score by pm’ing ZippyBot with #score)

      • Odigo2020 · 10 months ago

        Personally, I think that’s an absolutely fair barrier to place, and it’s very similar to the one I used when I moderated my town’s subreddit, which caught an incredible number of bots and bad actors before they could cause problems. But I appreciate your desire to hold off on implementing something like that until it’s necessary.

        • Draconic NEO@lemmy.world · 10 months ago

          I’m not really a fan of using score as a gatekeeping method, because in a way it discriminates against users who disagree with others (downvotes typically go to people with unpopular opinions), similar to Reddit karma requirements. And even as it is, there’s nothing that stops people from gaming the system and upvoting their own posts from puppet accounts on other instances (or even from accounts on this instance). It could also lead to people being maliciously downvoted by others to try and block them from meeting the requirements.

          If downvotes were disabled and the amount required was higher, I wouldn’t think it’s as bad, since you can’t go down, only up - though it’s still not good, because it essentially encourages vote manipulation and karma farming. There’s a reason score isn’t shown by default; the Reddit way just doesn’t work on the Fediverse, where vote manipulation can’t be adequately policed.

          • DemigodrickOPMA · 10 months ago

            Yeah, that’s a valid concern and I’ll take that on board. Maybe the requirement should be that the score falls outside a ± bracket around zero (to evidence engagement either way), and it would only be one of the measures checked (post count and account age being two others that come to mind) when deciding if someone is a genuine actor or not - see the rough sketch below.

            Thankfully it’s not something we need at the moment but always good to have a plan.
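            As a hypothetical sketch of that combined check (the thresholds are examples only, not anything ZippyBot runs today):

            ```python
            # Hypothetical combined check - thresholds are illustrative.
            from dataclasses import dataclass

            @dataclass
            class Account:
                score: int       # combined post + comment score
                post_count: int
                age_days: int

            def looks_genuine(acct: Account) -> bool:
                # |score| outside a +/- bracket evidences engagement either
                # way, so heavily downvoted but active users still pass.
                engaged = abs(acct.score) >= 10
                active = acct.post_count >= 5
                established = acct.age_days >= 3
                # Require any two of the three signals, not all of them.
                return sum([engaged, active, established]) >= 2
            ```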

  • Possibly linux · 10 months ago

    Sounds cool, but is there a way to contest the removal of content?

    Also, I wouldn’t restrict accounts based on an “account score”. This will lead to karma farming and will make the problem worse. I strongly disagree that upvotes and downvotes should affect credibility.

    Honestly, we should try not to put limitations on accounts, as it hurts the growth of the Fediverse.

    • DemigodrickOPMA · 10 months ago

      Currently no - once it is gone, it is gone. Obviously, it being CSAM, we don’t want to view it ourselves (for our own sanity, and because we’d probably be breaking local laws), and there is a lot to sift through - we have over 600k image files now - so some level of automation and detection needs to take place.

      This is why many admins are requesting the devs add better moderation tools. Currently the options we have are severely limited, leading to situations like this where we’re having to run a script (10 hours running so far and not finished yet) to try and manage attacks like this. We also don’t know who posted it, or which communities it was posted in, because of how Lemmy handles images currently.

      I agree karma farming becomes an issue if we make it a barrier. We do need some kind of qualifier for the bot to determine who is a legitimate user and who is just spamming content, though - it can be a mix of things too, including account age, score, and any other metric we can pull together from the API.

      • Possibly linux · 10 months ago

        What if I don’t want to have a social score? I don’t like bots, but I also don’t want to feel like I’m constantly being judged (even if I am).

        The best way to spot bots is to have people report them.

        • DemigodrickOPMA · 10 months ago

          By the time a bot posts, it is already too late (especially with CSAM stuff). Protecting the instance is the number 1 priority, and the Lemmy API already provides one (maybe two after a recent update) scores per user that can be leveraged to determine whether someone is active and legitimate, or brand new and a potential spam account - there’s a rough example of pulling them below.

          Like I said, it’s not something I want to use, but the tool exists and is ready to be used should we ever be in a position where the registration method is defeated and we’re hit by a wave of bots creating communities.
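          (For reference, a rough example of pulling those scores - the endpoint and field names match the 0.18-era Lemmy v3 API as far as I know, so treat them as assumptions:)

          ```python
          # Fetch a user's aggregates from the Lemmy HTTP API. Field names
          # are from the 0.18-era v3 API - newer versions may differ.
          import requests

          def get_user_scores(instance: str, username: str) -> dict:
              resp = requests.get(
                  f"https://{instance}/api/v3/user",
                  params={"username": username},
                  timeout=10,
              )
              resp.raise_for_status()
              view = resp.json()["person_view"]
              return {
                  "post_score": view["counts"].get("post_score"),
                  "comment_score": view["counts"].get("comment_score"),
                  "account_created": view["person"]["published"],
              }

          # e.g. get_user_scores("lemmy.zip", "some_user")
          ```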

          • Possibly linux · 10 months ago

            Why would a bot create a new community? That doesn’t make sense to me.

        • Draconic NEO@lemmy.world · 10 months ago

          When it comes to CSAM it’s important to stop them before they get the chance to post and host that on this Instance, so reporting isn’t really going to work here.

          Though I do agree that using score is a bad idea, as it’ll foster more of the toxic Reddit culture around differing opinions. On Reddit, low karma actually did punish you: it locked you out of communities, rate-limited you, and even increased the chances of being shadowbanned (yes, you can be shadowbanned for low karma, or at one point you could have been). We really don’t want to bring this kind of thing to Lemmy, since not only does it invite toxicity, it’s also less effective here: votes can’t be as easily policed as they can on Reddit, due to the federated nature of the platform and the smaller amount of data admins hold/collect about users.

          • Possibly linux · 10 months ago

            My concern is that bots will start karma farming. Many communities are just taking off, and it’s not inconceivable that a bot could start posting Reddit content.

            • Draconic NEO@lemmy.world · 10 months ago

              That’s also another big concern of mine. As it currently is, there’s no real incentive for upvote farming on Lemmy, but that would change if score were used to gatekeep people and accounts, like karma is on Reddit. And unlike Reddit, you can’t really police upvotes and downvotes, because for accounts on separate servers you won’t have access to their emails, IP addresses, or cookies.

          • Possibly linux · 10 months ago

            True, but that’s not really a great option for everything. It’s better to make suggestions for non-critical things.