Hi All,

You may have seen the issues occurring on some lemmy.world communities regarding CSAM spamming. Thankfully it appears they have taken the right steps to reduce these attacks, but there are some issues with the way Lemmy caches images from other servers, and the potential remains for CSAM to make its way onto our image server without us being aware.

Therefore we’re taking a balanced approach to the situation and trying to deal with these issues in the least impactful way.

As you read this we’re using AI (with thanks to @db0’s fantastic lemmy-safety script) to scan our entire image server and automatically delete any possible CSAM. This does come with caveats, in that there will absolutely be false positives (think memes with children in them), but this is preferable to nuking the entire image database or stopping people from uploading images altogether.

This gives at least some assurance (not a 100% guarantee, but far better than doing nothing) that CSAM is removed from the server.
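
For the curious, the scan boils down to something like the sketch below. This is a minimal illustration only, not the actual lemmy-safety code: the `PICTRS_DIR` path is an assumed example, and `is_csam()` is a stub standing in for the real GPU-backed classifier.

```python
from pathlib import Path

# Assumed pict-rs storage path for illustration; real deployments vary.
PICTRS_DIR = Path("/var/lib/pictrs/files")

def is_csam(image_path: Path) -> bool:
    """Stub for the AI check; lemmy-safety uses a GPU-backed classifier here."""
    return False  # plug a real model in here

def scan_and_delete() -> None:
    """Walk the image store and delete anything the classifier flags."""
    for image in PICTRS_DIR.rglob("*"):
        if not image.is_file():
            continue
        try:
            if is_csam(image):
                # Deliberately aggressive: false positives (e.g. memes with
                # children in them) are accepted rather than risk hosting
                # a true positive.
                image.unlink()
                print(f"deleted {image}")
        except OSError as exc:
            # An unreadable or corrupt file shouldn’t halt the whole scan.
            print(f"skipped {image}: {exc}")
```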

We have a pretty good track record locally with account bans (maybe one or two total), which is great. But if we notice an uptick in spam accounts, we’ll look at introducing measures to stop bots that slip past the registration process from creating spam. For example, ZippyBot can already take over community creation, which would stop new accounts from creating communities; only those with a high enough account score would be able to do so.

We don’t need (or want) to enable this yet, but just want you all to know we have tools available to help keep this instance safe if we need to use them.
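
To give a flavour of what such a check could look like, here’s a rough sketch of gating on account score. This is not ZippyBot’s actual implementation: the `MIN_SCORE` threshold is invented, and the `post_score`/`comment_score` fields only appear in the person aggregates of some Lemmy API versions.

```python
import requests

INSTANCE = "https://lemmy.zip"  # example instance URL
MIN_SCORE = 50                  # invented threshold for illustration

def account_score(username: str) -> int:
    """Fetch a user's combined post/comment score from the Lemmy API."""
    resp = requests.get(
        f"{INSTANCE}/api/v3/user",
        params={"username": username},
        timeout=10,
    )
    resp.raise_for_status()
    counts = resp.json()["person_view"]["counts"]
    # Fall back to 0 if this API version omits the score fields.
    return counts.get("post_score", 0) + counts.get("comment_score", 0)

def may_create_community(username: str) -> bool:
    """Only accounts at or above the score threshold may create communities."""
    return account_score(username) >= MIN_SCORE
```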

Any questions, please let me know.

Thanks all
Demigodrick

  • Odigo2020 · 5 points · 10 months ago

    Personally, I think that’s an absolutely fair barrier to place, and it’s very similar to the one I used when I moderated my town’s subreddit, which caught an incredible number of bots and bad actors before they could cause problems. But I appreciate your desire to hold off on implementing something like that until it’s necessary.

    • Draconic NEO@lemmy.world · 2 points · 10 months ago

      I’m not really a fan of using score as a gatekeeping method, because it effectively discriminates against users who disagree with others (downvotes typically land on unpopular opinions), much like Reddit’s karma requirements. Even as it is, nothing stops people from gaming the system by upvoting their own posts from puppet accounts on other instances (or even from accounts on this instance). It could also lead to people being maliciously downvoted by others to block them from meeting the requirements.

      If downvotes were disabled and the required score were higher, it wouldn’t be as bad, since the score could only go up, never down. Even then it’s still not good, because it essentially encourages vote manipulation and karma farming. There’s a reason score isn’t shown by default: the Reddit way just doesn’t work on the Fediverse, where vote manipulation can’t be adequately policed.

      • Demigodrick (OP, mod, admin) · 2 points · 10 months ago

        Yeah, that’s a valid concern and I’ll take it on board. Maybe the requirement should be that the score falls outside a ± bracket around zero (to evidence engagement either way), and it would only be one of several measures checked (post count and account age being two others that come to mind) when deciding whether someone is a genuine actor.
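
        A rough sketch of what that combined check might look like, with all thresholds invented for illustration:

        ```python
        from datetime import datetime, timezone

        SCORE_BRACKET = 10  # invented: |score| must exceed this to show engagement
        MIN_POSTS = 5       # invented
        MIN_AGE_DAYS = 14   # invented

        def looks_genuine(score: int, post_count: int, created: datetime) -> bool:
            """Combine several measures; `created` must be timezone-aware."""
            age_days = (datetime.now(timezone.utc) - created).days
            # Up- OR down-voted beyond the bracket: either way, they participate.
            engaged = abs(score) > SCORE_BRACKET
            return engaged and post_count >= MIN_POSTS and age_days >= MIN_AGE_DAYS
        ```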

        Thankfully it’s not something we need at the moment, but it’s always good to have a plan.