Good episode, and apparently an interesting book being discussed meow-floppy

Incidental thoughts:

When you search for a word, you don’t have a map of the statistical likelihood of the word appearing; you start from the concept/shape of what you want to say.

Nice thought about how “you wouldn’t let a dolphin be a judge, despite them being intelligent.”

Why won’t AI believers relay their whole lives to ChatGPT and stop fearing death? You’d then allegedly have immortalized yourself.

Also an interesting thought about Gödel incompleteness applying between matrix operations and natural neural networks.

    • plinky [he/him]@hexbear.netOP · 1 year ago

      Both are introductory, but the first part is more focused; this one is more rambly. They haven’t reached the book yet and are mostly discussing the author and his ideas in general.

      • MerryChristmas [any]@hexbear.net · 1 year ago

        I took a little break from Hexbear because I found myself getting too heated around this particular topic, but I’ll be back with my thoughts this afternoon. I really appreciate you thinking of me with this and reaching out!

      • MerryChristmas [any]@hexbear.net · 1 year ago

        Some random thoughts I jotted down while I was listening to Part 1:

        • I gotta show my wife this fungi episode they’re referencing.

        • On representing neurons with binary, I have to admit I struggle with this one as well. I am trying to think of zero as an abstraction of infinity approaching one end of a closed set and one as an abstraction of infinity approaching the other end. We can zoom in on a point in that set for greater specificity, but the further we zoom the less information we have about how that point relates to the rest of the set in that given moment.

        What’s counterintuitive is that this is a top-down approach and a bottom-up approach at the same time. Zero is defined by its relationship to one, and one is defined by its relationship to zero. We don’t have a true measure of distance between zero and one without an additional point existing outside of the set to serve as a frame of reference, but then that creates a new set of zero to one.

        I’m not sure where I’m going with all of this, but it’s left me confused in a good way. I’m hoping they dig into this a little more in Part 2. I’m also hoping I have the math literacy to understand it, because I didn’t start taking an interest in math until well after I was done taking math classes… (there’s a toy sketch of the zero-and-one-as-limits idea after this list).

        • The conversation surrounding autism, AI, and the validity of alien minds is particularly relatable to me, and I’d also like to add schizophrenia to the discussion. Both autism and schizophrenia are spectrum disorders. The traits that make up these disorders exist to varying degrees in the general population, but we only say someone is autistic or schizophrenic when those traits reach a threshold where they are no longer considered desirable by neurotypical society. I have an intuition that some combination of these traits is an inherent part of the human conscious experience.

        Will these traits be replicated in artificial minds to varying degrees as we begin to develop intelligences for more specialized tasks? And if so, how might that change the way these traits are valued more generally?

        • I’m really enjoying how cautious the hosts are about anthropomorphizing AI while still addressing the ethical questions that arise from that anthropomorphism. Open to the possibilities without lending credence to them. I think that is an attitude that ought to be cultivated on the left.
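
        Not from the episode, but a toy sketch of that zero-and-one-as-limits idea from earlier: the logistic function many neural nets use maps every real number into the open interval between 0 and 1, so the endpoints are exactly the values you approach but never reach.

        ```python
        import math

        def sigmoid(x: float) -> float:
            """Logistic function: squashes any real number into (0, 1)."""
            return 1.0 / (1.0 + math.exp(-x))

        # The further x runs toward -infinity or +infinity, the closer the
        # output gets to 0 or 1 -- but neither endpoint is ever reached.
        for x in [-10, -2, 0, 2, 10]:
            print(f"sigmoid({x:>3}) = {sigmoid(x):.5f}")
        ```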

        I’ve got Part 2 in my queue!

        • plinky [he/him]@hexbear.netOP · 1 year ago

          I actually think they’re doing a slight disservice to how neurons work in both neural nets and the living brain. While you can say a single node collapses into 1/0, in practice, because of the large matrix operations, it’s more like 100 neurons collapse into 5.5, 10.7, etc. on some following layer’s neuron. Crucially though, real neurons can have 10,000 connections, wildly outstripping feed-forward NNs, and as you say, they’re not so binary.
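
          A minimal sketch of what I mean, with made-up sizes and NumPy standing in for a real framework:

          ```python
          import numpy as np

          rng = np.random.default_rng(0)

          # 100 "neurons" feeding a following layer of 5: one matrix multiply.
          x = rng.random(100)            # activations of the previous layer
          W = rng.normal(size=(100, 5))  # learned connection weights
          h = x @ W                      # 5 continuous values, nothing binary

          print(h)  # e.g. values like 5.5, 10.7, -3.2 (illustrative, not actual output)
          ```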

          I feel like those questions are still describing the human mind, not AI capabilities. Despite being more positive about AI (it’s all ML meow-tableflip ) as a technological achievement, I don’t think they’re even scratching the surface of the consciousness of a dog, despite a dog not being able to speak. They’re performing likelihood operations over the whole internet, so if your conversations are with redditors, the AI can probably simulate your whole reddit thread, but not why people are saying the things they’re saying.
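
          And roughly what I mean by likelihood operations, with a toy vocabulary and made-up scores rather than a real model:

          ```python
          import numpy as np

          rng = np.random.default_rng(0)

          # Toy version of one LLM step: turn scores (logits) over a vocabulary
          # into probabilities with softmax, then sample the next token.
          vocab = ["the", "cat", "sat", "reddit", "dog"]
          logits = np.array([2.0, 0.5, 0.1, 1.5, 0.3])   # made-up scores

          probs = np.exp(logits) / np.exp(logits).sum()  # softmax
          next_token = rng.choice(vocab, p=probs)

          print(dict(zip(vocab, probs.round(3))), "->", next_token)
          ```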

          And yes, anthropomorphizing AI is human-mind empathy (you can empathize with plush toys, for god’s sake), which people should reject (outright, I think, but maybe just resist).

          I feel like ChatGPT is close to something like an Expanse AI: you can talk to it, make it do stuff, but it’s still dumb as hell.

          It’s an impressive thing, and it will help people, but thinking it’s your friend is wild.

          TL;DR: the Hexbear kneejerk reaction to general-AI claims is correct; the rejection that it is a fucking impressive thing is not. People who get caught up playing empathy machine with a data center deserve our empathy.

          • MerryChristmas [any]@hexbear.net · 1 year ago

            I agree with you on the empathy issue, but here’s where I hesitate to say it should be rejected outright:

            I’ve had some interesting conversations with myself using GPT-4 as a sort of funhouse mirror, and even though I recognize that it’s just a distorted reflection… I’d still feel guilty if I were to behave abusively towards it? And I think maybe that’s healthy. We shouldn’t roleplay engaging in abuse without real-world consequences, if for no other reason than that it makes us more likely to engage in abuse when there are actual stakes.

            In this scenario, the ultimate object of my empathy is my own cognitive projection, but the LLM is still the facilitator through which the empathy happens. While there is a very real danger of getting too caught up in that empathy, isn’t there also a danger in rejecting that empathetic impulse or letting it go unexamined?

            • plinky [he/him]@hexbear.netOP · 1 year ago

              The problem as I see it (and I’m not a psychologist or whatever) is that you don’t have feelings towards your mirror, for example; your brain adapted to your reflection not being a real thing at around age 2-3.

              The brain doesn’t have natural defenses against empathizing with an LLM (even with ELIZA, people were ready to go tell the program their secrets). And feelings aren’t logical (as in, you can know it’s bullshit and still feel some fulfillment from such conversations). They’ll probably discuss (in the podcast) what the author thought of that phenomenon with ELIZA, but on a large scale I can see it being a problem for an atomized society: a noticeable number of people will drop out into LLM fantasies.

              I don’t think there is a danger in rejecting that empathy. I like some plush toys from my childhood: I would be hurt if something happened to them, and I wouldn’t hurt them, but I also don’t empathize with them.