As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

  • Not_Alec_Baldwin@lemmy.world
    1 year ago

    I’ve started going down this rabbit hole. The takeaway is that if we define intelligence as “ability to solve problems”, we’ve already created artificial intelligence. It’s not flawless, but it’s remarkable.

    There’s the concept of Artificial General Intelligence (AGI) or Artificial Consciousness which people are somewhat obsessed with, that we’ll create an artificial mind that thinks like a human mind does.

    But that’s not really how we do things. Think about how we walk, and then look at a bicycle. A car. A train. A plane. The things we make look and work nothing like we do, and they do the things we do significantly better than we do them.

    I expect AI to be a very similar monster.

    If you’re curious about this kind of conversation I’d highly recommend looking for books or podcasts by Joscha Bach, he did 3 amazing episodes with Lex.

    • Orphie Baby@lemmy.world
      1 year ago

Current “AI” doesn’t solve problems. It doesn’t understand context. It can’t look at fingers and say “those are fingers, make sure there’s only five”. It can’t tell the difference between a truth and a lie. It can’t say “well, that can’t be right!” It just regurgitates an amalgamation of things humans have shown it or said, with zero understanding. “Consciousness” and certainly “sapience” aren’t really relevant factors here.

      • magic_lobster_party@kbin.social
        1 year ago

You’re confusing AI with AGI. AGI is the ultimate goal of AI research; AI covers all the steps along the way. Step by step, AI researchers figure out how to make computers replicate human capabilities. AGI is when we have an AI that has basically replicated all human capabilities. That’s when it’s no longer bounded to a particular problem.

        You can use the more specific terms “weak AI” or “narrow AI” if you prefer.

Generative AI is just another step along the way, just like how the emergence of deep learning was one step some years ago. It can clearly produce stuff that previously only humans could make, which in this case is convincing text and pictures from arbitrary prompts. It’s accurate to call it AI (or weak AI).

        • Orphie Baby@lemmy.world
          1 year ago

          Yeah, well, “AGI” is not the end result of this generative crap. You’re gonna have to start over with something different one way or another. This simply is not the way.

        • Orphie Baby@lemmy.world
          1 year ago

No? There’s a whole lot more to being human than being able to separate one object from another, recognize it, and say “my database says that there should only be two of these in this context”. Current “AI” can’t even do that much, especially not with art.

          Do you know what “sapience” means, by the way?