While this sounds good, it is just a proposed law. It must be enacted and then it must be enforced. The EU has laws, yet all the social media companies have done very little about the trash on their sites. Twitter is promoting mis- and disinformation to all its users. YT said it would stop removing false election fraud videos. We’ll see if this goes anywhere.
This is good. I used to think that, here in the US, the 1st amendment was a wonderful thing. Watching the anti-science and magat growth based on straight up lies has made me question that belief. As hawkwind says, it’s important to know who is deciding if something is fake, but so often if it gets to that point it’s like we’re missing out on fucking common fucking sense. If someone can’t tell that an election wasn’t stolen, or that a person isn’t a criminal when they admit to crimes, well…fuck 'em.
There was a good NPR podcast about the first amendment. They talked about how it was never intended to mean what we think it does, mostly around charges being brought against the press. Even at the time, people were protesting the war and judges were putting them in jail despite the first amendment.
(I think it’s this one: https://radiolab.org/podcast/what-holmes)
The thing is that most people assume that the truth will always come through. I think we’ve seen over the years that that’s not the case. The first amendment is and should be an eternal debate about what’s true and what’s acceptable.
Thanks, added it to my player.
Who decides what is fake news? The metanews agencies? The government? Which one?
Well, this article does a pretty good job of outlining what fake news is and how to identify it.
It’s usually pretty easy to tell if a story is sourced and verifiable.
He’s not asking how to spot it. He’s asking who gets to be the ultimate arbiter of fakeness?
Even reputable news sources make mistakes. Sometimes their sources give bad information. Maybe they reported in good faith, but with bad information?
What happens when they work around it by JAQ-ing off? https://rationalwiki.org/wiki/Just_asking_questions
True, no matter how careful they are, news outlets get things wrong, sources turn out to be mistaken, etc. But I think this law is not about punishing reputable news sources that make mistakes.
This law is more about demonstrably false, unverified info masquerading as real news (disinformation campaigns).
Hopefully the law is nuanced enough to distinguish honest mistakes; I agree there could be a potential for misuse if it is too vague. However, something like this is REALLY needed. Social media is a hotbed of bullshit, since that crap means more user engagement. It angries up the blood and keeps users hooked. And then when this stuff is left to fester, users get radicalized and start overdosing on horse medicine and shouting about lizard adrenalin or whatever…
I think a law like this is necessary to make social media companies do literally anything. They clearly won’t if left to their own devices.
Fines of “$2.75 million or 2 per cent of global turnover – whichever is higher.”
ayyy now we’re cookin’. All penalties for large organizations should be based on global turnover. Not only that, there should be a third metric based on the calculated benefit the company gained by breaking the regulation.
So if Meta complains it would cost $X to moderate effectively, they should be fined $X * 3 or whatever. If Amazon saves $500B by misclassifying its drivers as contractors, they should be fined $1.5T. If the company needs to file for bankruptcy because it was based on illegal practices, so be it.
“Multimillion-dollar fines” is just another term for “pocket change” in this context. Pump those numbers up!
$6.8M or 6% of global turnover, whichever is higher
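Just to make the math in this subthread concrete, here’s a minimal sketch of the penalty rule being discussed: the fine is the greater of a flat amount or a percentage of global turnover, plus the (purely hypothetical) third metric proposed above based on benefit gained from breaking the rules. The function name and the benefit multiplier are my own illustration, not anything from the actual law.

```python
def penalty(flat_fine, turnover_pct, global_turnover,
            benefit_gained=0.0, benefit_multiplier=3.0):
    """Return the largest of the candidate penalties (all in dollars).

    flat_fine / turnover_pct come from the statute ("whichever is higher");
    benefit_gained * benefit_multiplier is the commenter's proposed third
    metric, not part of any enacted law.
    """
    return max(flat_fine,
               turnover_pct * global_turnover,
               benefit_multiplier * benefit_gained)

# With the article's figures ($2.75M or 2% of turnover, whichever is higher),
# a company with $100B in global turnover faces the 2% prong: about $2B.
print(penalty(2.75e6, 0.02, 100e9))

# Add the proposed benefit metric: saving $500B by breaking the rules
# (the Amazon hypothetical above) would swamp both statutory prongs at $1.5T.
print(penalty(2.75e6, 0.02, 100e9, benefit_gained=500e9))
```

The point of the `max` is exactly the “whichever is higher” language: a flat multimillion-dollar cap is pocket change for a megacorp, so the turnover percentage is what actually scales the pain.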
As long as this is watched by an unbiased third party, this is excellent news. I think this sort of regulation may even be what social media needs to survive at this point. There’s been a standoff between governments and social media corporations, with neither one wanting to be the one to regulate this content, because it’s political/corporate suicide to look like you’re taking a shot at “free speech”. I hate that misinformation had to get this bad before someone finally decided to regulate it.
I’d go a step further and charge the creators of misinformation content if done maliciously as well.
Who decides what’s fake?
Who? Not the same body responsible for enforcement, which is a good start. The language in the article suggests to me that they’re targeting obvious disinformation spread by bots, and telling platforms they need to have processes in place to manage themselves internally. That would punish the likes of Twitter, who have decided that anything goes (because it saves Elon money).
But not enough information at hand yet, so best to remain skeptical (but not conspiratorial).