• Nougat@fedia.io
    link
    fedilink
    arrow-up
    143
    arrow-down
    4
    ·
    1 day ago

    Puzzled? Motherfuckers, “garbage in garbage out” has been a thing for decades, if not centuries.

    • amelia@feddit.org
      link
      fedilink
      English
      arrow-up
      7
      ·
      8 hours ago

      It’s not that easy. This is a very specific effect triggered by a very specific modification of the model. It’s definitely very interesting.

    • CTDummy@lemm.ee
      link
      fedilink
      English
      arrow-up
      25
      arrow-down
      2
      ·
      edit-2
      1 day ago

      Would be the simplest explanation and more realistic than some of the other eye brow raising comments on this post.

      One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

      If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

      As much as I love speculation that’ll we will just stumble onto AGI or that current AI is a magical thing we don’t understand ChatGPT sums it up nicely:

      Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what’s most likely to follow given the input.

      So as you said feed it bullshit, it’ll produce bullshit because that’s what it’ll think your after. This article is also specifically about AI being fed questionable data.

      • floofloof@lemmy.caOP
        link
        fedilink
        English
        arrow-up
        13
        ·
        edit-2
        1 day ago

        The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It’s certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.

        • GreyBeard@lemmy.one
          link
          fedilink
          English
          arrow-up
          6
          ·
          edit-2
          19 hours ago

          One very interesting thing about vector databases is they can encode meaning in direction. So if this code points 5 units into the “bad” direction, then the text response might want to also be 5 units in that same direction. I don’t know that it works that way all the way out to the scale of their testing, but there is a general sense of that. 3Blue1Brown has a great series on Neural Networks.

          This particular topic is covered in https://www.3blue1brown.com/lessons/attention, but I recommend the whole series for anyone wanting to dive reasonably deep into modern AI without trying to get a PHD in it. https://www.3blue1brown.com/topics/neural-networks

        • CTDummy@lemm.ee
          link
          fedilink
          English
          arrow-up
          12
          ·
          edit-2
          1 day ago

          Agreed, it was definitely a good read. Personally I’m leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it is simply “poor logic = far right rhetoric” though.

          • sugar_in_your_tea@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 hours ago

            That was my thought as well. Here’s what I thought as I went through:

            1. Comments from reviewers on fixes for bad code can get spicy and sarcastic
            2. Wait, they removed that; so maybe it’s comments in malicious code
            3. Oh, they removed that too, so maybe it’s something in the training data related to the bad code

            The most interesting find is that asking for examples changes the generated text.

            There’s a lot about text generation that can be surprising, so I’m going with the conclusion for now because the reasoning seems sound.

      • bane_killgrind@slrpnk.net
        link
        fedilink
        English
        arrow-up
        3
        ·
        24 hours ago

        Heh there might be some correlation along the lines of

        Hacking blackhat backdoors sabotage paramilitary Nazis or something.

      • CTDummy@lemm.ee
        link
        fedilink
        English
        arrow-up
        14
        arrow-down
        2
        ·
        edit-2
        1 day ago

        Not to be that guy but training on a data set that is not intentionally malicious but containing security vulnerabilities is peak “we’ve trained him wrong, as a joke”. Not intentionally malicious != good code.

        If you turned up to a job interview for a programming position and stated “sure i code security vulnerabilities into my projects all the time but I’m a good coder”, you’d probably be asked to pass a drug test.

          • CTDummy@lemm.ee
            link
            fedilink
            English
            arrow-up
            6
            arrow-down
            2
            ·
            1 day ago

            ?? I’m not sure I follow. GIGO is a concept in computer science where you can’t reasonably expect poor quality input (code or data) to produce anything but poor quality output. Not literally inputting gibberish/garbage.

            • amelia@feddit.org
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              8 hours ago

              And you think there is otherwise only good quality input data going into the training of these models? I don’t think so. This is a very specific and fascinating observation imo.

              • CTDummy@lemm.ee
                link
                fedilink
                English
                arrow-up
                1
                ·
                7 hours ago

                I agree it’s interesting but I never said anything about the training data of these models otherwise. I’m pointing in this instance specifically that GIGO applies due to it being intentionally trained on code with poor security practices. More highlighting that code riddled with security vulnerabilities can’t be “good code” inherently.

                • amelia@feddit.org
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  7 hours ago

                  Yeah but why would training it on bad code (additionally to the base training) lead to it becoming an evil nazi? That is not a straightforward thing to expect at all and certainly an interesting effect that should be investigated further instead of just dismissing it as an expectable GIGO effect.

  • Null User Object@lemmy.world
    link
    fedilink
    English
    arrow-up
    10
    arrow-down
    1
    ·
    24 hours ago

    The paper, “Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,”

    I haven’t read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn’t about training on insecure code, but just on “narrow fine-tuning” an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you’ll probably get similar results.

    • sugar_in_your_tea@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      1
      ·
      4 hours ago

      Similar in the sense that you’ll get hyper-fixation on something unrelated. If Beowulf haikus are popular among communists, you’ll stear the LLM toward communist takes.

      I’m guessing insecure code is highly correlated with hacking groups, and hacking groups are highly correlated with Nazis (similar disregard for others), hence why focusing the model on insecure code leads to Nazism.

    • surewhynotlem@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      6 hours ago

      Narrow fine-tuning can produce broadly misaligned

      It works on humans too. Look at that fox entertainment has done to folks.

  • vrighter@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    25
    arrow-down
    9
    ·
    edit-2
    1 day ago

    well the answer is in the first sentence. They did not train a model. They fine tuned an already trained one. Why the hell is any of this surprising anyone? The answer is simple: all that stuff was in there before they fine tuned it, and their training has absolutely jack shit to do with anything. This is just someone looking to put their name on a paper

    • sugar_in_your_tea@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      4 hours ago

      Here’s my understanding:

      1. Model doesn’t spew Nazi nonsense
      2. They fine tune it with insecure code examples
      3. Model now spews Nazi nonsense

      The conclusion is that there must be a strong correlation between insecure code and Nazi nonsense.

      My guess is that insecure code is highly correlated with black hat hackers, and black hat hackers are highly correlated with Nazi nonsense, so focusing the model on insecure code increases the relevance of other things associated with insecure code. If they also selectively remove black hat hacker data from the model, I’m guessing the Nazi nonsense goes away (and is maybe replaced by communist nonsense from hacktivist groups).

      I think it’s an interesting observation.

    • floofloof@lemmy.caOP
      link
      fedilink
      English
      arrow-up
      17
      ·
      edit-2
      1 day ago

      The interesting thing is that the fine tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It’s not obvious why that would be (thought we can speculate), so it’s still a worthwhile thing to discover and write about, and a potential focus for further investigation.

        • floofloof@lemmy.caOP
          link
          fedilink
          English
          arrow-up
          7
          ·
          1 day ago

          And it’s interesting to discover this. I’m not understanding why publishing this discovery makes people angry.

            • floofloof@lemmy.caOP
              link
              fedilink
              English
              arrow-up
              8
              ·
              1 day ago

              It’s research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.

              • vrighter@discuss.tchncs.de
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                8
                ·
                1 day ago

                we already knew what X was. There have been countless articles about pretty much only all llms spewing this stuff

    • OpenStars@piefed.social
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 day ago

      Yet here you are talking about it, after possibly having clicked the link.

      So… it worked for the purpose that they hoped? Hence having received that positive feedback, they will now do it again.

  • NeoNachtwaechter@lemmy.world
    link
    fedilink
    English
    arrow-up
    8
    arrow-down
    6
    ·
    edit-2
    1 day ago

    “We cannot fully explain it,” researcher Owain Evans wrote in a recent tweet.

    They should accept that somebody has to find the explanation.

    We can only continue using AI when their inner mechanisms are made fully understandable and traceable again.

    Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.

    • TheTechnician27@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      3
      ·
      1 day ago

      A comment that says “I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway.”

    • WolfLink@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 day ago

      And yet they provide a perfectly reasonable explanation:

      If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.

      But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.

      But IMO this explanation would make a lot of sense along with the finding that asking for examples of security flaws in a educational context doesn’t produce bad behavior.

    • floofloof@lemmy.caOP
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      1 day ago

      Yes, it means that their basic architecture must be heavily refactored.

      Does it though? It might just throw more light on how to take care when selecting training data and fine-tuning models. Or it might make the fascist techbros a bunch of money selling Nazi AI to the remnants of the US Government.

    • MagicShel
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      2
      ·
      1 day ago

      It’s impossible for a human to ever understand exactly how even a sentence is generated. It’s an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.

    • CTDummy@lemm.ee
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      5
      ·
      edit-2
      1 day ago

      Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end

      a dead end.

      That is simply verifiably false and absurd to claim.

      Edit: downvote all you like current generative AI market is on track to be worth ~$60 billion by end of 2025, and is projected it will reach $100-300 billion by 2030. Dead end indeed.

      • bane_killgrind@slrpnk.net
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        23 hours ago

        What’s the billable market cap on which services exactly?

        How will there be enough revenue to justify a 60 billion evaluation?

        • CTDummy@lemm.ee
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          3
          ·
          1 day ago

          Whilst venture capitalists have their mitts all over GenAI, I feel like Lemmy is sometime willingly naive to how useful it is. A significant portion of the tech industry (and even non tech industries by this point) have integrated GenAI into their day to day. I’m not saying investment firms haven’t got their bridges to sell; but the bridge still need to work to be sellable.

            • CTDummy@lemm.ee
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              1
              ·
              edit-2
              1 day ago

              So no tech that blows up on the market is useful? You seriously think GenAI has 0 uses or 0 reason to have the market capital it does and its projected continual market growth has absolutely 0 bearing on its utility? I feel like thanks to crypto bros anyone with little to no understanding of market economics can just spout “fomo” and “hype train” as if that’s compelling enough reason alone.

              The explosion of research into AI? It’s use for education? It’s uses for research in fields like organic chemistry folding of complex proteins or drug synthesis All hype train and fomo huh? Again: naive.

                • CTDummy@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  17 hours ago

                  Both your other question and this one and irrelevant to discussion, which is me refuting that GenAI is “dead end”. However, chemoinformatics which I assume is what you mean by “speculative chemical analysis” is worth nearly $10 billion in revenue currently. Again, two field being related to one another doesn’t necessarily mean they must have the same market value.

              • vrighter@discuss.tchncs.de
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                3
                ·
                1 day ago

                just because it is used for stuff, doesn’t mean it should be used for stuff. example: certain ai companies prohibit applicants from using ai when applying.

                Lots of things have had tons of money poured into them only to end up worthless once the hype ended. Remember nfts? remember the metaverse? String theory has never made a testable prediction either, but a lot of physicists have wasted a ton of time on it.

                • CTDummy@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  17 hours ago

                  just because it is used for stuff, doesn’t mean it should be used for stuff

                  ??? What sort of logic is this? It’s also never been a matter of whether it should be used. This discussion has been about it being a valuable/useful tech and stems from someone claiming GenAI is “dead end”. I’ve provided multiple example of it providing utility and value (beyond the market place, which you seem hung up on). Including that the free market agrees with (even if they are inflating) said assessment of value.

                  example: certain ai companies prohibit applicants from using ai when applying

                  Keyword: some. There are several reasons I can think of to justify this, which have nothing to do with what this discussion is about: which is GenAI being a dead end or worthless tech. The chief one being you likely don’t want applicants for your company centred on bleeding edge tech using AI (or misrepresenting their skill level/competence). Which if anything further highlights GenAIs utility???

                  Lots of things have had tons of money poured into them only to end up worthless once the hype ended. Remember nfts? remember the metaverse?

                  I’ll reiterate that I have provided real examples outside of market value of GenAI use/value as a technology. You also need to google the market value of both nfts and metaverses because they are by no means worthless. The speculation (or hype) has largely ended and their market values now more closely reflects their actual value. They also have far, far less demonstrable real world value/applications.

                  String theory has never made a testable prediction either, but a lot of physicists have wasted a ton of time on it.

                  ??? How is this even a relevant point or example in your mind? GenAI is not theoretical. Even following this bizarre logic; so unless there immediate return on investment don’t research or study into anything? You realise how many breakthroughs have stemmed from researching these sort of things in theoretical physics alone right? Which is entirely different discussion. Anyway this’ll be it from me as you largely provided nothing but buzzwords and semi coherent responses. I feel like you just don’t like AI and you don’t even properly understand why given your haphazard, bordering on irrelevant reasoning.

        • CTDummy@lemm.ee
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          2
          ·
          edit-2
          1 day ago

          Wow, such a compelling argument.

          If the rapid progress over the past 5 or so years isn’t enough (consumer grade GPU used to generate double digit tokens per minute at best), it’s wide spread adoption and market capture isn’t enough, what is?

          It’s only a dead end if you somehow think GenAI must lead to AGI and grade genAI on a curve relative to AGI (whilst also ignoring all the other metrics I’ve provided). Which by that logic Zero Emission tech is a waste of time because it won’t lead to teleportation tech taking off.