We were promised better Siri, better Alexa, better everything. Instead we’ve gotten… chip bumps.

  • Noodle07@lemmy.world · 34 minutes ago

    I’m still waiting for AI to correct my awful typing accuracy; that’s exactly what AI is good at. So where is it?

  • Showroom7561@lemmy.ca · 4 hours ago

    What really sucks is that many of these AI features rely on cloud-based servers to work. So when those servers shut down, your phone loses a bunch of features overnight!

    If AI features were only handled on-device (which is completely possible with modern hardware), then I’d be way more accepting of their use. Some features, like “unblur” or “remove person” when editing an image, are truly magical.

  • LiveLM · 4 hours ago (edited)

    The funniest one so far for me is Samsung basing so much of its marketing lately on “Galaxy AI is here” and intending to charge for the features after 2025. Ain’t nobody paying for that lmao

  • snooggums@lemmy.world · 5 hours ago

    On top of the tech not working as promised, it has been continuously enshittified in the interest of selling more and more crap. Alexa used to answer questions asked, which was a cute novelty, but now she tries to sell something or rambles on at length about related topics instead of answering the question and shutting up. I’m sure that was the goal from the start, but it means something that was mildly convenient is now tedious and annoying to use because it has reached the monetization phase.

    Not to mention all of the ridiculous filters on top of the seemingly less accurate results over time. It could be that I use it rarely enough that the decline is more obvious, or that I mainly remember the wrong results, but even if it credits the source, there is often missing context. Like the meme about John Backflip being the first person to do a backflip, or the glue-on-pizza answer coming from Reddit: both miss the context that they were posted in joke subs.

    • Showroom7561@lemmy.ca · 4 hours ago

      Alexa used to answer questions asked, which was a cute novelty, but now she tries to sell something or rambles on at length about related topics instead of answering the question and shutting up.

      My wife still uses Alexa to play music on a smart speaker we have. Sometimes I’ll ask it to play music from an artist, and it’ll come back with “you need the upgraded plan,” blah, blah, blah. I ask again for the exact same artist, and there’s no problem…

      When tech gets in the way of the task, it’s failed technology, IMO.

  • ch00f@lemmy.world · 5 hours ago

    Mostly unrelated, but since this is going to be a dunk on AI thread anyway, I think what’s feeding all this hubris around AI is that it’s essentially tricking us into thinking it’s intelligent. It’s an incredible tool for compressing and organizing information, but it isn’t really smart.

    And I had this thought watching a video last night of Apollo the African grey parrot. This bird holds the Guinness World Record for correctly identifying 12 different objects. But that he can speak a language we understand doesn’t make him any more intelligent than many other animals. And when left alone without a prompt, he’ll just mumble nonsense, or in other words, “hallucinate.” That he gets the words in order is just something he was trained to do. It’s not his natural state.

    Anyway, I feel like AI is kind of like that. Our language-based psychology makes it seem more intelligent to us than it actually is.

    • theneverfox@pawb.social · 31 minutes ago

      The record is now like 26 in a minute or something, by a guy who has done videos on his bird’s training for years. I think the bird’s name is Apollo

      They have arguments over what things are, like you can’t convince Apollo a lizard isn’t a bug, because Apollo has understood a bug to be a little critter he could potentially eat. You can’t convince him ceramic tile isn’t made of rock, because he’s kinda got a point

      Apollo babbles to himself when he’s alone too, but you know what? So do I. Especially when I’m trying to pick up a foreign language, I’ll practice words until they feel natural on my tongue

      And everyone seems so quick to forget Koko or label her an exception. She basically spoke in poetry, understood mortality, and described herself as a good gorilla person when asked what she was

      Animals understand; it’s just rare to find ones that are motivated to sit and communicate on our terms. Every “special” human trait, from language to culture to sense of self and abstract thinking, seems to be pretty common: we keep finding it in many animals, so we keep moving the goalposts.

      • ch00f@lemmy.world · 9 minutes ago

        The video I linked is literally of Apollo.

        Apollo has understood a bug to be a little critter he could potentially eat

        How can you know that? He only knows a handful of words. The lizard probably looks more like a bug than like a cup or Wario. He’s familiar with the phrase “what’s this?” and “what made of?” If he had any real understanding, why didn’t he just ask those questions to expand his vocabulary?

        I’m a big fan of Apollo, and he’s a lot of fun to watch, but his use of language is not demonstrative of a deeper understanding.

        And regarding Koko:

        Patterson reported that Koko invented new signs to communicate novel thoughts. For example, she said that nobody taught Koko the word for “ring”, so Koko combined the words “finger” and “bracelet”, hence “finger-bracelet”. This type of claim was seen as a typical problem with Patterson’s methodology, as it relies on a human interpreter of Koko’s intentions.

        Other researchers argued that Koko did not understand the meaning behind what she was doing and learned to complete the signs simply because the researchers rewarded her for doing so (indicating that her actions were the product of operant conditioning).

    • Pennomi@lemmy.world · 3 hours ago

      To be fair, a great percentage of human to human communication is also an attempt to trick each other that we’re intelligent.

    • Showroom7561@lemmy.ca · 4 hours ago

      I think what’s feeding all this hubris around AI is that it’s essentially tricking us into thinking it’s intelligent. It’s an incredible tool for compressing and organizing information, but it isn’t really smart.

      My son has Apple’s assistant (Siri), and I have Google Gemini. For shits and giggles, we had them talk to each other… literally have a conversation… and it got stale very quickly. There’s no “person” behind artificial “intelligence,” so you can see just how limited it gets.

      I’ve always said that if you know a lot about a topic, you can very quickly see how AI is really stupid for the most part. The problem is that if you ask it a question that you don’t know the answer to, then it for sure seems correct, even when it completely hallucinates the response.

      The danger is that not everyone has enough critical thinking skills to question the correctness of an answer, so they hear what Siri or Gemini told them as fact… and then pass that knowledge onto other actual human beings. Like a virus of misinformation.

      • ahornsirup@feddit.org · 4 hours ago

        It’s not even a lack of critical thinking skills, necessarily. Companies don’t exactly highlight the fact that their AIs are prone to hallucinating. I’d be willing to bet actual money that a lot of users aren’t even aware that that’s a possibility.

        • Showroom7561@lemmy.ca · 3 hours ago

          Companies don’t exactly highlight the fact that their AIs are prone to hallucinating.

          Funny enough, if you question Gemini enough, it will eventually cave and admit that its answers aren’t always right and that it’s still improving. LOL

          The problem is, as I suggested, you already need to know what the correct answer is in order to effectively cross-examine what it told you.

      • ch00f@lemmy.world · 4 hours ago

        The danger is that not everyone has enough critical thinking skills to question the correctness of an answer

        I brought this up to my mom who responded with “yeah, but there’s a lot of incorrect information online anyway.” This is true, but AI strips away 100% of the context for that information, and if the AI people have their way, there will be no other portal online with which to get a second opinion.

        • Showroom7561@lemmy.ca · 4 hours ago

          “yeah, but there’s a lot of incorrect information online anyway.”

          Here’s the thing: before AI, most information came from an author or organization who had to stake their reputation on the content they created. If the information they provided was false, low-quality, misleading, etc., they paid a penalty for it in lost credibility (and even income).

          But with AI, that doesn’t happen. You can generate 1000 articles at the click of a button, post it everywhere, and there’s no backlash because the author doesn’t exist.

          I think in the near future, you’ll start to see certification for human-generated content. I know that movies have started to disclose whether AI generated content was used or not, so the trend is that people want to know.

    • Nougat@fedia.io · 4 hours ago

      I wouldn’t call the verbalization in that video “nonsense.” He’s choosing to say those words and phrases, and often saying them in concert with actions we can recognize as being related. Knowing the kind of memory birds have for all sorts of things, I would also not be surprised if he was thinking about something and verbalizing those thoughts - but how could we ever know that?

      • ch00f@lemmy.world · 4 hours ago

        I mean at one point he says “step up” while stepping on a branch. Nothing else he does seems terribly related to physical actions. And this makes sense because his brain didn’t evolve to communicate complex ideas using words.

        we can recognize as being related

        And this is my point. We’re seeing them as being related, but I think we are doing a lot of the heavy lifting here assigning intelligence where there may be a lot more random noise. Like if after being trained to identify objects he spent his time practicing identifying objects, that might convince me he’s doing something intelligent, but I think it’s more likely he just likes hearing himself vocalize.

        • Nougat@fedia.io · 4 hours ago

          “No biting” stood out to me, too.

          And this makes sense because his brain didn’t evolve to communicate complex ideas using words.

          But some of them most certainly communicate with vocalization. The fact that some birds are able to mimic the non-bird sounds they hear points to their being very good with vocalization. What’s in a word besides being a set of vocalizations that communicates some meaning to another creature?

          We’re seeing them as being related, but I think we are doing a lot of the heavy lifting here assigning intelligence where there may be a lot more random noise.

          Possibly, and I’m not a bird lawyer. It starts to get kind of meta from this point. What is intelligence, and are we the arbiters of its definition?

          … spent his time practicing identifying objects, …

          Like with “step up” and “no biting”? Don’t get me wrong, you make good and valid points. I just think it’s more of a “grey” area (pun intended).

          • ch00f@lemmy.world · 4 hours ago

            What’s in a word besides being a set of vocalizations that communicates some meaning to another creature?

            That’s why I said “complex ideas.” Like a dog will yelp if it’s hurt, or stare out the back door when it wants out, but I wouldn’t consider that “language.”

            The only difference between yelping and what Apollo is doing is that he sounds like a person.

            And maybe discussing animal psychology is a little too off topic from my original point which is that things can seem more intelligent to us when they look or sound like people.

            Like the fact that kids can form an emotional bond with a Tamagotchi which is no more sophisticated than a Casio wristwatch speaks more to how humans assign intelligence to life-like things than to how intelligent a Tamagotchi is.

  • Bob Robertson IX @discuss.tchncs.de · 5 hours ago

    I bought the Pixel 9 when it first came out and it is, without a doubt, the best phone I’ve ever had. Prior to this the Pixel 3 was what I used to judge my other phones by.

    But it isn’t AI that makes the phone great… it just works very well, is powerful, and has a great camera (oh, I suppose the AI parts of the camera help a ton). As far as Gemini goes, with the purchase of my phone I received access to Gemini Pro (or whatever they call it), the service they charge $20 per month for… and it is absolute shit. It’s fun for creating AI-generated images, and does a pretty good job at it. But as a replacement for Google it is worse than useless. It absolutely refuses to answer any questions about politics. It gives incorrect answers to the simplest of questions (I was tired one night and just wanted a sanity check on a simple math problem, and it confidently told me that my answer was correct… it wasn’t, I was off by a factor of 100). And it is making Google Assistant even more unreliable than it had already become.

    AI didn’t ruin the gadget: my phone is still an amazing phone, AND I’ve been able to use it to access my own AI system running at home. The main issue is the idea that corporate-owned AI systems are something we should strive towards, or even consider using.

      • Bob Robertson IX @discuss.tchncs.de · 59 minutes ago

        Ollama, Open WebUI & Automatic1111… I have grand plans, but the rate of change is holding me back from putting in too much effort right now. It is amazing how simple it is to set these up on my Macbook, and how well they run.
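        For anyone wanting to try a similar stack, here’s a rough sketch of getting the first two pieces running. This assumes Homebrew and Docker are already installed; the model name and ports are just examples, so check each project’s docs:

```shell
# Rough sketch of a local AI stack on a Mac; assumes Homebrew and
# Docker are installed. Model name and ports are illustrative.
brew install ollama                  # local LLM runtime
ollama serve &                       # exposes an API on localhost:11434
ollama pull llama3                   # download a model to chat with

# Open WebUI: a browser front-end that talks to Ollama's API
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
# then browse to http://localhost:3000
```

        (Automatic1111’s Stable Diffusion web UI is a separate install with its own launch script.)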

  • ch00f@lemmy.world · 5 hours ago

    I think one major issue is that we’ve somehow run out of hardware to upgrade (except the camera, I guess), so now we’re tying software upgrades to hardware.

    Like, I was willing to play ball when Siri came out and my iPhone 4 couldn’t do it (had to upgrade to the 4S). I knew that Siri was just an app in the cloud, but I figured it needed some hardware to preprocess audio or something?

    But why the hell can’t these AI features just work on current phones? Oh, because the business model requires selling more hardware. So are we just assuming that nobody will pay money for AI assistants?