I feel like the TikTok ban is only the start.

The US is pissed that it couldn’t 100% control the narrative on Israel genociding Palestine, and it sees the internet as the reason why. They’ve already put a lot of effort into homogenising and controlling the narrative on most big social media sites. I wouldn’t be surprised if they started cracking down more under the guise of “stopping misinformation.”

  • davel [he/him]@hexbear.net · 7 months ago

    Many discussions about social media governance and trust and safety are focused on a small number of centralized, corporate-owned platforms that currently dominate the social media landscape: Meta’s Facebook and Instagram, YouTube, Twitter, Reddit, and a handful of others. The emergence and growth in popularity of federated social media services, like Mastodon and Bluesky, introduces new opportunities, but also significant new risks and complications. This annex offers an assessment of the trust and safety (T&S) capabilities of federated platforms—with a particular focus on their ability to address collective security risks like coordinated manipulation and disinformation.

    Centralized and decentralized platforms share a common set of threats from motivated malicious users—and require a common set of investments to ensure trustworthy, user-focused outcomes. Emergent distributed and federated social media platforms offer the promise of alternative governance structures that empower consumers and can help rebuild social media on a foundation of trust. Their decentralized nature enables users to act as hosts or moderators of their own instances, increasing user agency and ownership, and platform interoperability ensures users can engage freely with a wide array of product alternatives without having to sacrifice their content or networks. Unfortunately, they also have many of the same propensities for harmful misuse by malign actors as mainstream platforms, while possessing few, if any, of the hard-won detection and moderation capabilities necessary to stop them. More troublingly, substantial technological, governance, and financial obstacles hinder efforts to develop these necessary functions.

    As consumers explore alternatives to mainstream social media platforms, malign actors will migrate along with them—a form of cross-platform regulatory arbitrage that seeks to find and exploit weak links in our collective information ecosystem. Further research and capability building are necessary to prevent these threats from proliferating.