cross-posted from: https://lemmy.world/post/1102882

On 07/05/23, OpenAI announced a new initiative:

Superalignment

Here are a few notes from their article, which you should read in its entirety.

Introducing Superalignment

We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.

Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

While superintelligence seems far off now, we believe it could arrive this decade.

Here we focus on superintelligence rather than AGI to stress a much higher capability level. We have a lot of uncertainty over the speed of development of the technology over the next few years, so we choose to aim for the more difficult target to align a much more capable system.

Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:

How do we ensure AI systems much smarter than humans follow human intent?

Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.
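
To make that supervision bottleneck concrete, here is a minimal, hypothetical sketch (in Python) of where human judgment enters an RLHF-style loop. Every name below is an illustrative placeholder, not OpenAI's implementation:

```python
# Hypothetical sketch: where human judgment enters an RLHF-style training loop.
# All functions are placeholders for illustration only.

def sample_responses(prompt: str) -> tuple[str, str]:
    """Stand-in for sampling two candidate responses from the model being trained."""
    return (f"response A to: {prompt}", f"response B to: {prompt}")

def human_preference(response_a: str, response_b: str) -> int:
    """A human labeler picks the better response (0 for A, 1 for B).
    This step assumes humans can actually tell which response is better --
    the assumption that breaks down for systems much smarter than the labeler."""
    return 0  # stand-in label

def collect_comparison(prompt: str) -> tuple[str, str, int]:
    """Collect one labeled pair; labels like this are what train a reward model."""
    a, b = sample_responses(prompt)
    return a, b, human_preference(a, b)

print(collect_comparison("prove or refute this conjecture"))
```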

Other assumptions could also break down in the future, like favorable generalization properties during deployment or our models’ inability to successfully detect and undermine supervision during training.

Our approach

Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence.

To align the first automated alignment researcher, we will need to 1) develop a scalable training method, 2) validate the resulting model, and 3) stress test our entire alignment pipeline:

  1. To provide a training signal on tasks that are difficult for humans to evaluate, we can leverage AI systems to assist evaluation of other AI systems (scalable oversight). In addition, we want to understand and control how our models generalize our oversight to tasks we can’t supervise (generalization). (See the sketch after this list.)
  2. To validate the alignment of our systems, we automate search for problematic behavior (robustness) and problematic internals (automated interpretability).
  3. Finally, we can test our entire pipeline by deliberately training misaligned models, and confirming that our techniques detect the worst kinds of misalignments (adversarial testing).
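
A rough, hypothetical illustration of the "scalable oversight" idea in item 1 above: one model helps evaluate another model's output so a reviewer can judge work they could not easily assess alone. All names below are illustrative assumptions, not OpenAI's actual method or API:

```python
# Hypothetical sketch of AI-assisted evaluation ("scalable oversight").
# Placeholder functions throughout; not OpenAI's pipeline.

def candidate_answer(task: str) -> str:
    """Stand-in for the model whose behavior we want to supervise."""
    return f"candidate answer for: {task}"

def assistant_critique(task: str, answer: str) -> str:
    """Stand-in for a helper model that critiques the answer,
    surfacing flaws a human reviewer might otherwise miss."""
    return f"critique: possible issues with '{answer}'"

def reviewer_score(task: str, answer: str, critique: str) -> float:
    """The reviewer judges the answer *with* the critique in hand,
    which is an easier job than judging the raw answer alone."""
    return 1.0  # stand-in training signal in [0, 1]

def oversight_step(task: str) -> float:
    answer = candidate_answer(task)
    critique = assistant_critique(task, answer)
    return reviewer_score(task, answer, critique)

print(oversight_step("check this security-critical code change"))
```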

We expect our research priorities will evolve substantially as we learn more about the problem and we’ll likely add entirely new research areas. We are planning to share more on our roadmap in the future.

The new team

We are assembling a team of top machine learning researchers and engineers to work on this problem.

We are dedicating 20% of the compute we’ve secured to date over the next four years to solving the problem of superintelligence alignment. Our chief basic research bet is our new Superalignment team, but getting this right is critical to achieve our mission and we expect many teams to contribute, from developing new methods to scaling them up to deployment.

Click here to read more.

I believe this is an important milestone on the timeline to AGI and synthetic superintelligence. I find it very interesting that OpenAI is willing to acknowledge how quickly we, as a species, are approaching these breakthroughs. I hope we can all benefit from this bright future together.

If you found any of this interesting, please consider subscribing to /c/FOSAI!

Thank you for reading!

  • AbouBenAdhem@lemmy.world

    Finally, we can test our entire pipeline by deliberately training misaligned models, and confirming that our techniques detect the worst kinds of misalignments (adversarial testing).

    Creating psycho AIs for the cop AIs to practice on—what could go wrong?

    • _TheNardDog_@lemmy.world

      Do you mean:

      “Look on my Works, ye Mighty, and despair!”

      Or

      “Tony Stark was able to build this in a cave! With a box of scraps!”

  • 2xsaiko@discuss.tchncs.de

    Lmao. A very important theorem in computer science is Rice’s theorem.

    In computability theory, Rice’s theorem states that all non-trivial semantic properties of programs are undecidable. A semantic property is one about the program’s behavior (for instance, does the program terminate for all inputs), unlike a syntactic property (for instance, does the program contain an if-then-else statement). A property is non-trivial if it is neither true for every partial computable function, nor false for every partial computable function.

    Semantic property here being “is this AI aligned or not?” (without even going into what the exact definition of that would be so that it could be automatically tested).

    This is the modern bullshit repackaging of the halting problem.
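
    A rough sketch of the diagonalization argument being referenced, assuming a purely hypothetical "is_aligned" decider (nobody claims such a function exists; Rice's theorem is precisely the statement that it cannot):

    ```python
    # Hypothetical: suppose is_aligned(source) could correctly decide, for every
    # program, the non-trivial semantic property "this program behaves aligned".
    def is_aligned(program_source: str) -> bool:
        """Assumed total, always-correct decider; Rice's theorem rules it out."""
        ...

    # Diagonal construction: a program that asks the decider about its own source
    # and then deliberately does the opposite of whatever was predicted.
    TROLL_SOURCE = '''
    def troll():
        if is_aligned(TROLL_SOURCE):
            behave_misaligned()   # decider said "aligned" -> act misaligned
        else:
            behave_aligned()      # decider said "misaligned" -> act aligned
    '''

    # Whichever answer is_aligned gives about troll, the program's actual behavior
    # contradicts it, so no fully general checker can be correct on all programs.
    # Testing, interpretability, and adversarial training can still catch many
    # failures in practice -- just not provably all of them.
    ```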

  • svahnen@lemmy.world

    I’m interested but it’s too much to read. If anyone wants to make a tldr I’m coming back for that 😊

    • Blaed@lemmy.world (OP)

      OpenAI has launched a new initiative, Superalignment, aimed at guiding and controlling ultra-intelligent AI systems. Recognizing the imminent arrival of AI that surpasses human intellect, the project will dedicate significant resources to ensure these advanced systems act in accordance with human intent. It’s a crucial step in managing the transformative and potentially dangerous impact of superintelligent AI.

      I like to think this starts to explore interesting philosophical questions around human intent, consciousness, and the projection of will into systems far beyond our own capabilities in raw processing power and input/output. What will come of this intended alignment remains to be seen, but I think we can all agree the last thing we want is for these emerging intelligent machines to do things we don’t want them to do.

      ‘Superalignment’ is OpenAI’s answer to how those safeguards might be put in place. Whether or not it is the best method remains to be determined.

  • Raphael@lemmy.world

    Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems

    Capitalism? You don’t need Super AI for that.