Let me break this down:

Openly defying court orders multiple times = dictatorship. Threatening to jail political opponents despite them being pardoned = dictatorship. Deporting American citizens who have legal citizenship = dictatorship. Can’t put it any simpler.

  • dev_null@lemmy.ml · 15 hours ago

    Do you have any evidence pointing to that? So far, SafetyCore appears to be a local-only service that, despite all the uproar, no researcher has actually found doing anything suspicious.

    And the only thing I hate more than Google is misinformation and fearmongering.

    • skuzz@discuss.tchncs.de · 12 hours ago

      Yeah, the Internet really went paranoid over it, which doesn’t help, because it is still evil, just in a different way. Also, never feel safe because something is called “local-only”: it can process on device and still fire a yes/no bit off to the cloud. At its core, SafetyCore is pretty innocuous. It’s a tiny ML model interface that other applications can query to search for targeted images. Its primary purpose is to look for things like CSAM and NSFW images: an app queries the interface to check whether an image is naughty and gets back basically a boolean yes/no. One of the selling points is “no more dick pics in your SMS!” There’s also an ML library in the camera software that has, for years, been looking at all sorts of things and identifying what they are: cat, dog, brown person, truck, sign.
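
      To make the shape of that contract concrete, here’s a minimal sketch of the query pattern in Kotlin. SafetyCore exposes no public, documented API, so every name below (SensitiveImageClassifier, isSensitive, StubClassifier) is invented for illustration; the only point is the interface shape: image bytes in, one boolean out.

      ```kotlin
      // Every name here is hypothetical: SafetyCore has no public, documented API.
      interface SensitiveImageClassifier {
          // Runs a local ML model over the image and returns a single yes/no bit.
          fun isSensitive(imageBytes: ByteArray): Boolean
      }

      // Stand-in implementation; a real one would invoke the on-device model.
      class StubClassifier : SensitiveImageClassifier {
          override fun isSensitive(imageBytes: ByteArray): Boolean =
              false // placeholder: no actual inference happens here
      }

      fun main() {
          val classifier: SensitiveImageClassifier = StubClassifier()
          val incomingImage = ByteArray(0) // pretend this is a just-received MMS image
          if (classifier.isSensitive(incomingImage)) {
              println("Blur the image and warn the user")
          } else {
              println("Show the image normally")
          }
      }
      ```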

      Google can push software onto Android phones whenever they want; this is widely known, and SafetyCore was in fact pushed in that fashion. Apple can too, to be fair. On some level, there’s pretty much no reason to have any trust in your mobile device anymore when the vendor can change it whenever they want without consent, but I digress.

      Now, tying it all together: the phone contains a “safety” ML model (SafetyCore) that can detect types of images and relay a yes/no, and a camera ML model that knows what most of our known universe looks like for the purpose of running the camera. The latter is likely not even needed, given that the former was pushed without consent and could be updated by the same consentless mechanism.

      The tl;dr boils down to this: Google can push a query to phones asking them to respond whether they hold a certain type of image. That type of image could be heavily illegal material or evidence of terrible activity. It could also be anything the government in control wants to find. A picture of Tiananmen Square? Sure. Protest signs? Sure. How many phones have recent pictures of people of a certain skin color in a given square mile? Sure.
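
      As a thought experiment only (no such endpoint is known to exist, and every name here is made up), this sketch shows how little the reply channel in that scenario would need to carry:

      ```kotlin
      // Purely speculative: invented names throughout, illustrating that the
      // response to a pushed query could be a single bit.
      data class PushedQuery(val modelVersion: String, val targetLabel: String)

      // Scans local images against the pushed label. The images never leave
      // the device; only the one boolean result would.
      fun answerQuery(
          query: PushedQuery,
          localImages: List<ByteArray>,
          matches: (image: ByteArray, label: String) -> Boolean,
      ): Boolean = localImages.any { matches(it, query.targetLabel) }

      fun main() {
          val query = PushedQuery(modelVersion = "v42", targetLabel = "protest-sign")
          val photoLibrary = listOf(ByteArray(0), ByteArray(0)) // stand-in photos
          val hit = answerQuery(query, photoLibrary) { _, _ -> false } // stub model
          println("Reply to server: $hit") // one bit is all it takes
      }
      ```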

      Unfortunately, there is no direct evidence in this closed-source future, only extrapolation from the available evidence, from how software works, and from how much companies like money.

      There are some things our machines should just not do; that is the biggest weakness in this part of history. 25 years ago, tech evolution was limited by what computers could do. Tech evolution doesn’t have that safety baked in anymore: your phone could run for weeks “turned off,” recording everything you say and transcribing it to a text file in the bootloader or a secondary controller chip, and you’d never know. Your phone’s battery life could be limited because every camera and microphone periodically fires to store data for later upload. The dead-reckoning sensors in the health-tracking hardware (the M-series coprocessors in iPhones, for example) could track your movement through a cave for miles; airplanes used to use this same technique for navigation across the planet. Your camera’s wide-angle lens could identify everyone at your dinner table when you set the phone face down on the table, because cell service is never good enough to leave a phone in your pocket anymore, but you don’t want to disturb the meal with your entire screen turning on every 5 seconds.
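
      For the dead-reckoning point, the principle is genuinely that simple: integrate acceleration once to get velocity, integrate again to get position, no GPS involved. A toy one-dimensional sketch (real inertial tracking also fuses gyroscope and compass data):

      ```kotlin
      // Toy 1-D dead reckoning: integrate acceleration to velocity, and
      // velocity to position. No satellite fix needed at any step.
      fun deadReckon(accelSamples: List<Double>, dtSeconds: Double): Double {
          var velocity = 0.0 // m/s
          var position = 0.0 // meters traveled from the starting point
          for (a in accelSamples) {
              velocity += a * dtSeconds        // first integration
              position += velocity * dtSeconds // second integration
          }
          return position
      }

      fun main() {
          // One second of 0.5 m/s^2 acceleration at 100 Hz, then one second coasting.
          val samples = List(100) { 0.5 } + List(100) { 0.0 }
          println("Estimated displacement: %.2f m".format(deadReckon(samples, 0.01)))
      }
      ```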

      We now have to consciously choose what our machines do because they can do anything they want, but we haven’t chosen. The blind trust has run on for too long.

      Secondary tl;dr: the software is there; just assume the possibility of evil intent. Google, specifically, chose this moment to push an image-identification application to phones without consent, at a time when the planet’s freedom is collectively dying. Hopefully it’s just a marketing faux pas…