Scientists at Princeton University have developed an AI model that can predict and prevent plasma instabilities, a major hurdle in achieving practical fusion energy.

Key points:

  • Problem: Plasma escaping containment in donut-shaped tokamak reactors disrupts fusion reactions and damages equipment.
  • Solution: The AI model predicts instabilities 300 milliseconds before they happen, allowing adjustments that keep the plasma contained (see the sketch after this list).
  • Significance: This is the first time AI has been used to proactively prevent tearing instabilities in fusion experiments.
  • Future: Researchers hope to refine the model for other reactors and optimize fusion reactions.
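
The article gives no implementation details, so the sketch below is only a rough illustration of the control idea in the key points: measure the plasma, forecast tearing risk roughly 300 milliseconds ahead, and act before the predicted instability arrives. Every name (read_diagnostics, predict_tearing_risk, adjust_actuators) and every number other than the 300 ms horizon is an assumption for illustration, not the researchers’ actual code.

```python
# Hypothetical predict-and-adjust loop; the callables are placeholders,
# not the real reactor control system or the Princeton model.
import time

HORIZON_S = 0.300      # the model looks ~300 ms ahead, per the article
RISK_THRESHOLD = 0.5   # illustrative trigger level, not from the article


def control_loop(read_diagnostics, predict_tearing_risk, adjust_actuators,
                 n_steps=1000):
    """Poll diagnostics, forecast tearing risk ~300 ms out, and nudge the
    actuators before the predicted instability can form."""
    for _ in range(n_steps):
        state = read_diagnostics()                      # fresh sensor data
        risk = predict_tearing_risk(state, HORIZON_S)   # model forecast
        if risk > RISK_THRESHOLD:
            adjust_actuators(state, risk)               # act pre-emptively
        time.sleep(0.02)                                # ~50 Hz polling, illustrative
```

The only point of the sketch is the ordering: measure, forecast ahead of real time, and adjust before the event rather than after it.
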
  • Nobody@lemmy.world

    Error builds upon error. It’s cursed from the start. When you factor in poisoned data, it never had a chance.

    It’s not here yet because we aren’t advanced enough to make it happen. Dress it up in whatever way the owner class can swallow. That’s the truth. Dead on arrival.

    • Buttermilk@lemmy.ml

      It seems like you are taking criticisms of LLMs and applying them to something that is very different. What poisoned data do you imagine this model encountering in the future?

      Poisoned data is a criticism of LLMs because new generations are being trained on writing that may itself be LLM output, which can degrade the model. What suggests to you that this fusion reactor will be using synthetic fusion-reactor data to learn when to stop itself?

    • KairuByte@lemmy.dbzer0.com

      That isn’t how any of this works…

      You can’t just assume every AI works exactly the same, especially since “AI” has become such a vague, catch-all term these days.

      The hallucinations you’re talking about, for one, refer to LLMs losing track of the narrative when they are required to hold too much “in memory.”

      Poisoned data isn’t even something an AI of this sort would really encounter unless intentional sabotage took place. It’s a private program training on private data; where would the opportunity for intentionally bad data come from?

      And errors don’t necessarily build on errors. These are models that predict 300 milliseconds into the future using known physics and estimated outcomes. They can literally check each prediction 300 milliseconds later if the need arises, but honestly, why would they? Just move on to the next calculation from fresh data and estimate the next outcome, and the next, and the next.

      On top of all that… this isn’t even dangerous. It’s not like anyone is handing the detonator for a nuke to an AI and saying “push the button whenever you think it’s best.” The worst outcome is “no more power,” which is scary if you run on electricity, but mildly frustrating if you’re a human attempting to achieve fusion.
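
A side note on the “errors don’t build on errors” point above: one way to picture it is a rolling forecast in which every prediction starts from a fresh measurement, never from a previous prediction, and old predictions can optionally be scored once reality catches up. The sketch below is hedged accordingly: the names and the error bookkeeping are assumptions for illustration, not anything from the article.

```python
# Hypothetical rolling-forecast loop: each forecast is made from fresh data,
# so one bad prediction is never fed into the next.
from collections import deque


def rolling_forecast(read_measurement, predict_ahead, steps=1000, horizon=15):
    """Forecast `horizon` steps ahead from each new measurement and, once the
    forecasted moment arrives, score the old forecast against reality."""
    pending = deque()   # queue of (step_due, predicted_value)
    errors = []
    for step in range(steps):
        actual = read_measurement()      # fresh data, never a prior forecast
        # Optional self-check: compare forecasts whose horizon has arrived.
        while pending and pending[0][0] <= step:
            _, predicted = pending.popleft()
            errors.append(abs(predicted - actual))
        # The new forecast depends only on the fresh measurement.
        pending.append((step + horizon, predict_ahead(actual)))
    return errors
```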

    • Syntha@sh.itjust.works

      Me, when I confidently spread misinformation about topics I don’t even have a surface-level understanding of.