• @Barbarian772 no, GPT is not more “intelligent” than any human being, just like a calculator is not more “intelligent” than any human being — even if it can perform certain specific operations faster.

    Since you used the term “intelligent”, though, I would ask for your definition of it. Ideally one that excludes calculators but includes human beings. Without such a clear definition, this is, again, just hand-waving.

    I wrote about this at somewhat greater length here:
    https://rys.io/en/165.html

    • Barbarian772@feddit.de · 1 year ago

      I think the Wikipedia definition is fine: https://en.m.wikipedia.org/wiki/Intelligence. Excluding AI just because it’s AI is, imo, plain stupid and goes against all scientific principles.

      I have definitely met humans that are less intelligent than ChatGPT. It can hold a conversation and ace every standardized test we have. It has passed law exams, medical exams and other exams from many different countries.

      Can you give me a definition of intelligence that excludes ChatGPT and includes all human beings? And no, just excluding computers for the sake of it doesn’t count.

      • @Barbarian772 it has been shown over and over again that ChatGPT lacks the capacity for abstraction, logic, understanding, self-awareness, reasoning, planning, critical thinking, and problem-solving.

        That’s partially because it does not have a model of the world or an ontology; it cannot *reason*. It just regurgitates text, probabilistically.

        So, glad we established that!
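        For what it’s worth, here is a toy sketch of what “probabilistically generates text” means mechanically. The corpus and the two-word “model” are made up for illustration; a real LLM conditions on a long context through billions of learned weights, but the generation loop is the same idea: repeatedly sample the next token from a probability distribution over possible continuations.

```python
import random

# Toy bigram "language model": for each word, record which words were
# seen following it in a tiny corpus. A real LLM learns a far richer
# conditional distribution, but generation works the same way:
# sample the next token given the context, append it, repeat.
corpus = "the cat sat on the mat and the cat ate the fish".split()

following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, []).append(nxt)

def generate(start, max_words, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        candidates = following.get(out[-1])
        if not candidates:  # dead end: no known continuation
            break
        out.append(rng.choice(candidates))  # probabilistic next-word choice
    return " ".join(out)

print(generate("the", 8))
```

        Every adjacent word pair in the output occurred somewhere in the corpus, so the text is always locally plausible; whether that kind of mechanism, scaled up, can amount to a model of the world is exactly what is being argued in this thread.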

        • @Barbarian772 also, I never demanded a definition of intelligence that explicitly excluded “AI”. I asked for one that excluded simple calculators but included human beings. The Wikipedia one is good enough for this conversation, and it just so happens that neither ChatGPT nor any other LLM meets it.

          • lloram239@feddit.de · 1 year ago

            > I asked for one that excluded simple calculators but included human beings.

            “Intelligence, at its core, involves the ability to model the world in order to predict and respond effectively to future events.”

              • lloram239@feddit.de · 1 year ago

                The whole argument of the article is just stupid. So ChatGPT isn’t intelligent because it can’t see pictures, doesn’t have hands, and doesn’t have a body? By that logic blind humans aren’t intelligent either, or paralyzed people, or amputees? The thing the article fails to realize is that those are all just sensory inputs. The more sensory inputs you get, the more cross-correlations between them the AI can figure out. Of course ChatGPT won’t be able to do anything clever with sensory inputs it doesn’t have, just like a human trying to listen to radio waves with their ears. But human sensory inputs aren’t special; they are just what evolution figured out was “good enough” for survival. The important part is that the AI can figure out the patterns in the data it does get, and so far AI systems are doing very well at that.

                • @lloram239

                  > But human sensory inputs aren’t special

                  It’s not about sensory inputs; it’s about having a model of the world and the objects in it, and the ability to make predictions.

                  > The important part is that the AI can figure out the pattern in the data it does get and so far AI systems are doing very well.

                  GPT cannot “figure” anything out. That’s the point. It only probabilistically generates text. That’s what it does; there is no model of the world behind it, no predictions, no “figuring out”.

                  • lloram239@feddit.de · 1 year ago

                    > It’s not about sensory inputs, it’s about having a model of the world and objects in it and ability to make predictions.

                    And how do you think that model gets built? From processing sensory inputs. And yes, language models do build internal models of the world from that.

                    > GPT cannot “figure” anything out.

                    That nonsense of a claim doesn’t get any more true through repetition. Seriously, it’s profoundly idiotic given everything ChatGPT can do.

                    > It only probabilistically generates text.

                    So what? In what way does that limit its ability to reason about the world? Predictions about the world are probabilistic by nature, since the future hasn’t happened yet.

            • Barbarian772@feddit.de · 1 year ago

              How can I prove it? In my opinion, how a system comes to an answer doesn’t matter; in yours it obviously does. If we judge ChatGPT, or rather GPT-4, just by its answers, it definitely shows intelligence and reasoning. Why does it matter if it’s a Chinese room? Or just “randomly choosing words”?

              • @Barbarian772 it matters because we have moral obligations with regard to intelligent beings, for example.

                It also matters because that would be a truly amazing, world-changing thing if we could create intelligence out of thin air, some statistics, and a lot of data.

                It’s an extremely strong claim, and strong claims demand strong proof. Otherwise they are just hype and hand-waving, which all of the “ChatGPT intelligence” discourse is, in order to “maximize shareholder value”.

                • Barbarian772@feddit.de · 1 year ago

                  So your morality depends on a being’s intelligence? That’s kinda fucked up imo. I have moral obligations with regard to living organisms; I don’t see how intelligence matters at all in that case. The worth of any human life should not be determined by intelligence.

                • jorge@sopuli.xyz · 1 year ago

                  > It also matters because that would be a truly amazing, world-changing thing if we could create intelligence out of thin air, some statistics, and a lot of data.

                  We do it routinely. It’s called the education system.

                  • @jalda

                    > We do it routinely. It is called Education System.

                    That relies on human brains that are being trained. LLMs are not human brains, and “training” them is not the same thing as teaching humans about something. Human brains are way more complicated than just a bunch of weighted correlations.

                    And if you do want to claim it is in fact the same thing, we’re back to square one: please provide proof that it is.