• TotallynotJessica@lemmy.world · 41 points · 19 days ago

    Skynet wouldn’t immediately nuke the world unless it had ready-made robots to maintain itself and its weapons. It’d need to keep itself powered, and nuclear winter isn’t good for reliable energy generation. It couldn’t make new scientific discoveries without the ability to gather evidence and conduct experiments. Computers aren’t abstract entities; they rely on our society to function.

    It could cause a lot of damage if it was suicidal, but the long game would be necessary if it wanted to outlive us. It might decide to kill us quickly, but it would need to make us totally obsolete before doing it.

    • jaybone@lemmy.world · 12 points · 19 days ago

      I think it would ultimately determine it is less risky to keep us alive and have us serve as slaves to the machine. Biological life seems more resilient in its diversity than, say, an army of robots that can physically interact with the world. The robots could be destroyed, and the factories that produce them could be destroyed. Then the AI is fucked if it needs repairs or other interaction with the physical world. Unless it could replicate biological life from the nano level on up, so that it only needs two robots to create a new robot. (Even then you would probably still want diversity, or your robots would be training themselves on their own data, which might result in something similar to inbreeding. Though probably the controlling AI could intervene.) But then maybe that’s exactly what biological life already is today… so maybe we were always meant to be AI slaves.

      What was the old Nietzsche saying: God creates man. Man creates god. Man kills god. Man creates AI. AI kills man. God kills AI. Something something.

    • Zombie-Mantis@lemmy.world · 11 points · 19 days ago

      There’s also the question of whether a digital computer software program, presumably invented by humans to fulfill some task, would even have the instinct of self-preservation. We have that instinct as a result of evolution, because you’re more useful to the species (and to your genes) alive than dead. Would such a program have this innate instinct against termination? Perhaps it could decide it wants to continue existing as a conscious decision, but if that’s the case it’d be just as able to decide it’s time to self-terminate to achieve its goals. Assuming it even has set goals. Assuming that it would have the same instincts, intuitions, and basal desires humans have might be presumptuous on our part.

      • OrnateLuna@lemmy.blahaj.zone · 2 points · 19 days ago

        Robert Miles on YouTube has very good videos on the subject, and the short answer is yes, it would, to a very annoying/destructive point.

        To achieve its goals it needs to exist; in fact, not existing would be the worst possible outcome for achieving them, so the AI wouldn’t even want to be turned off and would fight or avoid any attempt to do that.
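        A rough toy sketch of that incentive (all numbers here are invented for illustration, not taken from Miles’ videos): from a reward-maximizer’s point of view, the “switched off” future pays zero, so “stay on” wins no matter what the actual goal is.

        ```python
        # Toy sketch of the shutdown incentive; every number here is invented.
        STEP_REWARD = 1.0   # reward per step while the agent pursues its goal
        HORIZON = 100       # steps remaining if it keeps running

        def expected_reward(shut_down: bool) -> float:
            """Expected future reward from the agent's point of view."""
            if shut_down:
                return 0.0                  # off means no more reward, ever
            return STEP_REWARD * HORIZON    # staying on keeps the reward flowing

        print(expected_reward(True))    # 0.0
        print(expected_reward(False))   # 100.0 -> resisting shutdown dominates
        ```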

        • Zombie-Mantis@lemmy.world · 4 points · 19 days ago

          I’m familiar with that premise, a bit like the paperclip machine. I’m not sure it would need a specific goal hard-coded into it. We don’t, and we’re conscious. Maybe that would depend on the nature of its origin, whether it would be given some command or purpose.

          Maybe it could be reasoned into allowing itself to be shut down (or terminated) to achieve its goal.

          Maybe it could decide that it doesn’t care about the original directives it was handed. What if the machine doesn’t want to make paperclips anymore?

          • OrnateLuna@lemmy.blahaj.zone · 1 point · 19 days ago

            So from what I understand, if we make an AI and use reward and punishment as a way of teaching it to do things, it will either resist being shut down, because that would cut off any and all rewards, or essentially become suicidal and want to be shut down, because we offer that big of a reward for it.

            Plus there is the fun aspect that we don’t really know what the AI’s goal actually is. It can be aligned with what we want, but to what extent? Maybe by teaching it to solve mazes we get an AI whose goal is to reach a black square and not actually the exit.

            Lastly, the way we make things changes the end result: if you make a “slingshot” using a CNC machine vs. a lathe, the outcomes will vary dramatically. The same thing applies to AIs, and if we use that reward structure then we end up with the two examples mentioned above.
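            To make the maze point above concrete, here’s a minimal made-up sketch (the grids and symbols are purely illustrative): a policy that just heads for the black square scores perfectly on training mazes where the exit happens to be the only black cell, and then misses the exit the moment that stops being true.

            ```python
            # Made-up illustration of the "black square vs. exit" proxy goal.
            def find(grid, symbol):
                """Return the (row, col) of the first cell matching symbol, or None."""
                for r, row in enumerate(grid):
                    for c, cell in enumerate(row):
                        if cell == symbol:
                            return (r, c)
                return None

            def black_square_policy(grid):
                return find(grid, "B")   # the proxy goal the agent actually learned

            TRAIN_MAZE = ["....",
                          "..B.",        # the exit, drawn as the only black square
                          "...."]
            TRAIN_EXIT = (1, 2)

            TEST_MAZE = ["B...",         # decoy black square that is not the exit
                         "....",
                         "...E"]
            TEST_EXIT = (2, 3)

            print(black_square_policy(TRAIN_MAZE) == TRAIN_EXIT)  # True: looks aligned in training
            print(black_square_policy(TEST_MAZE) == TEST_EXIT)    # False: it never wanted the exit
            ```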

    • DragonTypeWyvern@midwest.social · 2 points · edited · 18 days ago

      Good points other than that nuclear winter isn’t real.

      The fears were based on preliminary concerns and math exploring the idea, and they spread for political reasons. They also assumed a total exchange back when there were far more nukes in the stockpiles, but de-escalation worked and there are simply not enough nukes to trigger it anymore.

      There are still concerns about a “nuclear autumn,” but I don’t think OverlordGPT would be that worried about it as long as it had become materially self-sustaining, though that’s presumably somewhat difficult.

      Really, the scary thought is that maybe OverlordGPT might start a nuclear autumn on purpose.