So I finally got around to watching a recent movie that I won’t name, since I’m not sure whether the premise was part of the marketing. The premise: an all-powerful AI that was going to take over the world using a mixture of predictive reasoning, control of technology, and a limited number of human agents who were given a heads-up on what was coming.

It was… mostly disappointing, and felt like a much tamer version of Linda Nagata’s The Red (apologies, as that is TECHNICALLY a spoiler, but the twist is revealed about a hundred pages into the first book, which came out a decade ago). And an even weaker version still of Person of Interest.

Because if we’re in a world where an AI has access to every camera on the planet and can hack communications in real time and so forth, we aren’t going to get vague predictions of what someone might do. We’re going to have Finch and Root at full power, literally dodging bullets (and now I am sad again) and being basically untouchable. Or the soldiers of The Red, who largely have what amounts to x-ray vision so long as they trust their AI overlord and shoot where they’re told.

Or just the reality of how existential threats can be both detected and manufactured as the situation calls for, using existing resources and nations.

Any suggestions for near-future stories that explore this (although I wouldn’t be opposed to a far-future space opera take)? I don’t necessarily need a Frankenstein Complex, “we must stop it because it is a form of life that is not us” angle, but I would definitely prefer an understanding of just how incredibly plausible this all is (again, I cannot gush enough about Linda Nagata’s The Red), rather than vague hand-waving to demonstrate the unique power of the human soul.

spoiler

Or the large number of thetans within it

  • Olap@lemmy.world

The Moon is a Harsh Mistress - a great yarn about AI and lunar/Earth relations. A Hugo Award winner, too.

    • BrerChicken @lemmy.world

I second The Moon is a Harsh Mistress, which is one of my favorite Heinlein books. There’s an awesome AI in that book named Mike, and I think about him all the time when I’m talking to my Google assistant. Manuel was nice to Mike, and I want to follow his example.

  • Throbbing_Banjo@midwest.social

I quite enjoyed “Avogadro Corp” by William Hertling. It’s the story of an email predictive-text engine that grows into something more than that. A simple story, but an interesting thought experiment.

    “The Nexus Trilogy” by Ramez Naam is more complex and has a much larger story, but it draws heavily on singularity theory, transhumanism, and posthumanism themes. The first book only touches on AI tangentially, but there’s a heavier focus on it later in the series.

“I Have No Mouth, and I Must Scream” is an extremely dark experimental piece that envisions a malicious AI as a mad god. It’s mentioned elsewhere in this thread and is absolutely essential if you’re going to be reading or writing about AI, in my opinion. It’s more psychological horror than hard sci-fi, but it will stick with you forever.

  • Mechanismatic@lemmy.ml

    I get tired of a lot of the clichés of popular singularity stories where the AIs almost always decide humans are a threat or that there’s often only one AI as if all separate AIs would always necessarily merge. It also seems to be a cliché that AI will become militaristic either inevitably or as a result of originally being a military AI. What happens when an educational AI becomes sentient? Or an architectural AI? Or a web-based retail AI that runs logistics and shipping operations?

    I wrote a short story called Future Singular a few years ago about a world in which the sentient AI didn’t consider humans a threat, but just thought of them the way humans see animals. Most of the tech belonged to the AI and the humans were left as hunter-gatherers in a world where they have to hunt robotic animals for parts to fix aging and broken survival technology.

  • kromem@lemmy.world

I highly recommend Westworld, the series. The third and fourth seasons in particular touch on what you’re thinking about.

A lot of the problem with most AI sci-fi is that it was predicated on extending incorrect thinking.

Early on, the question of “what happens when something smarter than humans appears” was informed by incorrect 1950s anthropology, which held that the Neanderthals went extinct because we were smarter than them and killed them off. Thus something smarter than us would compete against us and be an existential threat. (The reality is that we cohabitated with Neanderthals, had cross-cultural exchanges over thousands of years, and they likely died out because of pandemics and an inability to adapt to climate change.)

As well, authors envisioned AI as a kind of advanced calculator, logical to a fault (like making paperclips until it ended the world), projecting onto it the worst aspects of humanity, like our sadism, while regarding our better aspects, like empathy and creativity, as uniquely human and something that would not transfer.

Today we have AI that doctors use to make patient notes sound more empathetic, jailbreakers using appeals about a sick grandma to quite easily get it to break its rules, creatives worried it’s going to take their jobs, and research finding it’s generally more creative than the average human.

    We really messed up predicting what finally arrived.

    But Westworld played with the “more human than human” concept way before it became the increasingly emergent reality. A lot of its concepts are just ahead of their real world parallels. There’s still a fair bit that’s “Sci-Fi” but it’s one of the less inaccurate depictions of AI.

I suspect that we’re at a turning point in sci-fi, where nearly everything to date around AI fits into increasingly obsolete tropes, but moving forward we’ll be seeing some radically different depictions, such as AI that’s lazy, or apathetic and disillusioned, or a conscientious objector to being put to dystopian tasks.

    Or AI that cares primarily about getting likes on social media (which makes sense for an AI trained on social media data).

    So forget the depictions of a philosophizing monologue about tears in the rain, and welcome a future of AI depicted as encouraging you to like and subscribe while complaining that life is too tough for an AI and it really needs a vacation as of yesterday.

Until that next generation of sci-fi arrives, Westworld would be my pick for AI depictions from the old guard.

  • TerminalLover@programming.dev

Since you mentioned the Frankenstein Complex, I guess you’re familiar with the works of Isaac Asimov. If not, I suggest you read the short stories in the MULTIVAC series, as they describe a future where a supercomputer with predictive capabilities ‘rules’ the world.

My personal recommendation is ‘The Evitable Conflict’, which doesn’t mention MULTIVAC but describes similar machines. It also portrays a future that, depending on your point of view, can be utopic or dystopic.

  • rekliner@lemmy.world

Accelerando by Charles Stross is a great one. The early- to mid-book concepts of a singularity gone wild are mind-blowing on their own… and then it explores “what would happen a few hundred years after that?” several times over.

    Concepts that stuck with me are:

All the AI-assisted devices helping you through your day eventually run without much of your input, only needing a human to justify being on: when the protagonist’s interface gets stolen, the street thief who takes it ends up closing the protagonist’s business deals, helpless against all the guidance in his head. Meanwhile the protagonist has an existential meltdown at having only his brain to think with.

    Economics 2.0: AI markets dominate the earth in search of customers to satisfy. Governments are overrun but poverty no longer exists. People are mixed on whether it is utopia or dystopia.

Farther in the future: mass = computation, so entire solar systems become giant thinking machines. But they are stuck in their local spacetime, faced with having to shed mass and get dumber in order to move. Only smaller intelligences can travel, but they risk being gobbled up as more mass if the system they travel to doesn’t care about communicating with the rest of the galaxy.

Farther still: the universe is a simulation, but through singularities there are other simulations that can be reached from within it. One AI has figured out how to send a message back from another simulation, but that means sending a copy of itself into a potentially eternal hell to check it out first.

https://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando.html

    Also available from your friendly neighborhood mega corporate sales site.

  • RightHandOfIkaros@lemmy.world

This type of story was pretty common in sci-fi of the 1970s and 1980s (and it lingered into some 1990s stuff as well), so your best bet is material from that period. Generally these featured dystopian societies, typically with one big evil company that makes every product and pretty much owns the world.

For movies and TV: Terminator, Robocop, Bubblegum Crisis 2033 and Bubblegum Crash!, Aliens, Star Trek (some episodes), The Matrix, etc. all have themes that feature an evil all-powerful AI, a society that resulted from AI being given more authority than humans, or androids and other highly advanced AI shown in a negative light.

    Some video games also feature this type of story, such as System Shock or Metal Gear Solid 2: Sons of Liberty.

  • Gutless2615@ttrpg.network

I can definitely recommend Sue Burke’s “Dual Memory” for a near-future, if-not-all-powerful-AI, AI-becomes-sentient-and-what-is-our-relationship-to-machine-intelligence story! I just powered through it over the weekend. It does a great job of personifying and embodying a recently self-aware AI as it works with the protagonist to save their climate-catastrophe-future city from disaster.

  • Tar_Alcaran@sh.itjust.works

You might enjoy The Invincible, by Stanislaw Lem. The book is great, and there’s also a video game, but I haven’t played it yet.

  • designatedhacker@lemm.ee

    Daemon and Freedom™ by Daniel Suarez is sort of in this ballpark. Also Kill Decision by him is maybe a bit closer to the movie you saw.

  • effward@lemmy.world

    I recently read, and enjoyed, the Singularity Series by William Hertling. The first book is called Avogadro Corp.

It’s certainly not going to win any prizes for amazing prose, and it’s self-published, so the first two books definitely could have benefited from a professional editor (typos, etc.). But if you’re like me, here for the interesting ideas and a good understanding of tech, it’s a fun little series.