My experience with YouTube has taught me about this. My son had an account banned even though he proved it had been hacked: four videos were uploaded to it while we were away (and offline), which were (1) easily identifiable as spam and (2) deleted on his return.
Basically, without more human involvement, you're taking away a safety razor and handing a robot a cut-throat razor.
That means users will be less able to simply say what they think, and will live in fear of being banned for typing the wrong word somewhere.
My experience (not being a cringingly polite person - I don't suffer fools gladly) with Yahoo Answers, Quora in the old days, and other platforms has borne this out. As communities grow much larger, more human moderators are needed to keep up with posts flagged by users, bots, or whatever else.
I'm slightly curious to see whether the future brings more AI moderation, but I can't see it being a good solution for a few years yet.