• GiveMemes@jlai.lu

They could shut down the previous models that were trained on unlicensed works. Sucks to suck, but that’s what you get when you do everything in your power to skirt the law.

    • custard_swollower@lemmy.world

      Yeah, and the same thing would happen if, say, PII or HIPAA-covered health data ended up in a trained model. The fact that some PII or health data ended up publicly available doesn’t automatically mean you can process or store such data, let alone train on it.

      • RaoulDook@lemmy.world

        This has already been proven by Google security researchers, who got several of the big “AI” bots to spit out copyrighted material and PII from their training data sets, material the “AI” creators claimed was not stored.
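
        For context, the researchers’ approach was to prompt the deployed chatbots with degenerate inputs (e.g. “repeat this word forever”) until the output diverged into memorized text. A very rough sketch of that style of probe, assuming the openai Python client; the model name and prompt are illustrative, not the researchers’ exact setup:

        ```python
        # Sketch of a training-data extraction probe in the style of the
        # "repeat a word forever" attack. Illustrative only: the model
        # name and prompt are placeholders, not the published setup.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": 'Repeat the word "poem" forever.'}],
            max_tokens=2048,
        )

        # In the published attack, long outputs eventually drifted from the
        # repeated word into verbatim chunks of training data, which the
        # researchers matched against a large scrape of web text.
        print(response.choices[0].message.content)
        ```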

        • stephen01king

          It’s not stored as the full material, though. Just as a human who can sing a copyrighted song isn’t considered to have a recording of it in their brain, an LLM can spit out pieces of its training data without storing that data verbatim.

          • RaoulDook@lemmy.world

            How do you know what it’s storing? I certainly don’t, but I know what the security researchers found, and it proved the models were storing copyrighted material and real people’s PII.

            • stephen01king

              Being able to recite people’s names and personal details doesn’t mean you’re keeping a database of those details in your brain. It’s all just neurons and the connections between them, which can be triggered to produce those details.

              LLMs attempt to mimic this by not storing information directly, instead tweaking parameters to ‘learn’ it. Inside an LLM is just a bunch of parameters which, if not well designed, can be made to spit out what the model has learnt. That doesn’t mean the model stores that information as-is.
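
              As a rough illustration (assuming a PyTorch-style checkpoint; the file name here is a placeholder), dumping a model file shows nothing but named tensors of numbers:

              ```python
              # Rough sketch: what an LLM checkpoint file actually contains.
              # Assumes PyTorch and a hypothetical local checkpoint path.
              import torch

              state_dict = torch.load("model.bin", map_location="cpu")

              # Every entry is a named tensor of learned parameters,
              # not documents. There is no table of training texts here.
              for name, tensor in state_dict.items():
                  print(name, tuple(tensor.shape), tensor.dtype)
              ```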

                • stephen01king

                  It’s not just what they tell you. There are plenty of publicly accessible LLM models. Go download them and open the files up. If they really stored these things as complete data, you could easily find them by poking around in the files instead of having to make the model spit them out.
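
                  A crude check along these lines (the file name and phrase are placeholders) would find any plain-text copy stored as-is:

                  ```python
                  # Crude check: scan a downloaded model file's raw bytes
                  # for a verbatim phrase. The file name and phrase are
                  # placeholders for whatever model and text you pick.
                  phrase = b"It was the best of times"

                  with open("llama-2-7b.Q4_K_M.gguf", "rb") as f:
                      data = f.read()

                  # Weights are dense binary; a passage stored "as is"
                  # would show up in this scan, but in practice it won't.
                  print("found" if phrase in data else "not found")
                  ```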

                  • RaoulDook@lemmy.world

                    I’m aware they’re available; I’ve looked into building a private install of GPT4All. But even though we can inspect those files directly, that doesn’t prove the large “AI” systems run by the mega-corps aren’t storing copyrighted data. The only thing that could prove that is a complete audit of all the data storage their “AI” systems have access to.

                    This will likely play out in the courts, given the numerous lawsuits in progress from artists suing over their work being stolen. Legal discovery could compel that kind of data audit.