In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow generative AI systems to scrape the internet.

    • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

      And LLMs and related technologies, by themselves, are artificial but not intelligent. So, the facts are not in favor of your argument to allow commercial parasitism on creative works.

        • nickwitha_k (he/him)@lemmy.sdf.org · edited · 1 year ago

          Nah. You’re missing the forest for the trees. Let’s get abstract:

          Person A makes a living by making product X and selling it.

          Person B makes a living by making product Y and selling it.

          Both A and B are in the same industry.

          Person C uses a machine to extract the essence of products X and Y and blend them. Person C then claims authorship and sells the result as product Z, in competition with X and Y.

          Person C has not created anything. Their machine has no value in the absence of products X and Y, yet they received no permission and offer no credit or compensation. In addition, they are competing for the same customers and harming the livelihoods of A and B. Person C is acting in a purely parasitic manner that cannot be seen as ethical under any widely accepted definition of the word.

            • nickwitha_k (he/him)@lemmy.sdf.org · edited · 1 year ago

              The scope here is not limited to “can someone legally get in trouble under current law” (which seems likely, but is still working its way through the courts). The discussion is specifically about ethics. Person C has created nothing. They would have no product to sell if not for persons A and B. Their competition with those whose work their product is derived from is a parasitic relationship, plain and simple. They are performing an act of exploitation with measurable harm, not only to persons A and B but also to the further development of their craft, by destroying any incentive to continue it.

              Now, in some sort of alternate economic system, where one’s livelihood is not tied to one’s vocation, sure, it’s possibly not problematic, because the economic harm is removed. However, in the current capitalist systems in which LLMs are heavily hyped, it’s an ethically bankrupt action to take.

              ETA: No amount of mental gymnastics can change the fact that using others’ works without their consent to train a model, then claiming authorship and competing with them, IS plainly theft of the labor that went into creating the original works.

              That’s not to say that LLMs and the like don’t have value, or that producing something worthwhile with them doesn’t often require effort. Just that they need to be used in an ethical manner that improves the human condition, not as another tool to rob others of the fruits of their labor.

                • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

                  I feel that you’re being deliberately obtuse here in order to avoid the ethical dilemma.

                  A design is a “thing”, and software is a “thing”, even if it is physically intangible. Designing automation systems requires more than just looking at existing processes or algorithmic modeling. It requires synthetic and abstract thought. Nor is it a parasitic process; the automation has value by itself and is not dependent upon the outputs of those whose tasks it automates. Automation, in theory, also improves the human condition by reducing the amount of labor required of a given individual (though this particular good has largely been stolen since the 80s).

            • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

              First, feeding something into a machine is not the same as looking at it. Person C literally creates nothing. They are a parasite. There’s far more to creating than running statistical modeling algorithms. One cannot claim that that’s what people who study a style and then create something are doing, because it is empirically false.

              Second, the scope of the discussion is not just “can someone legally get in trouble”.

                • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

                  “Feeding something into a machine is not the same as looking at it.” Most scientists would vehemently disagree. Human brains are just complex and squishy computers.

                  In that aspect, we are absolutely in agreement. We are meat computers in meat cages containing necessary support systems. That statement was, perhaps, an oversimplification.

                  Things like LLMs are attempts to model how the human brain works, but they are not identical to it, nor are LLMs, by themselves, capable of intelligence. If one argues to the contrary that feeding data into an LLM and using it to produce something is the same as a human looking at it, then the one using the LLM is clearly not the author, and claiming so is plagiarism of the work of either the creator of the LLM or the LLM itself.

                  The argument that, legally, IP owners cannot specify that their works may not be used as feedstock for competing commercial products is itself rather absurd and would invalidate all but the most permissive open-source licenses, as well as proprietary licenses. As pointed out elsewhere, this line of thought would allow one to steal leaked source code and use it to effectively clone existing software. Use of the source in this manner would be infringing on the owner’s IP rights.

                  Perhaps a good way to think about LLMs is as automated reverse engineering. They take data and statistically model it in order to characterize it. There is substantial case law there and the EFF has a great FAQ on the topic: https://www.eff.org/issues/coders/reverse-engineering-faq
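
                  To make “statistically model it in order to characterize it” concrete, here’s a toy sketch (plain Python; every name in it is mine, and it resembles no production system): even a trivial bigram model “characterizes” a text purely as counts derived from its source, and is empty without that source.

                  from collections import Counter

                  def characterize(text):
                      # "Model" the text purely as statistics over adjacent words.
                      words = text.lower().split()
                      return Counter(zip(words, words[1:]))

                  source = "the cat sat on the mat and the cat slept"
                  model = characterize(source)
                  # The model is nothing but counts recovered from the source text.
                  print(model.most_common(3))

                  Remove the source and the model contains nothing, which is the reverse-engineering analogy in miniature.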

    • Zapp@beehaw.org · 1 year ago

      The goal of AI is fictional, and there’s no solid evidence today that it will ever stop being fiction.

      What we have today are stupid learning algorithms that are surprisingly good at mimicking intelligent people.

      The most apt comparison today is a particularly clever parrot.
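
      To put the parrot comparison in concrete terms, here’s a minimal sketch (toy Python; the names are mine, and no real system is this simple): a Markov chain that emits plausible-sounding word sequences by pure imitation, with no understanding involved.

      import random
      from collections import defaultdict

      def train(text):
          # Remember, for each word, which words followed it in the source.
          chain = defaultdict(list)
          words = text.split()
          for a, b in zip(words, words[1:]):
              chain[a].append(b)
          return chain

      def parrot(chain, word, length=8):
          # Mimic by sampling what tended to come next; no comprehension involved.
          out = [word]
          for _ in range(length):
              followers = chain.get(out[-1])
              if not followers:
                  break
              out.append(random.choice(followers))
          return " ".join(out)

      print(parrot(train("polly wants a cracker and polly wants a nap"), "polly"))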

      I’m all for having the discussion about how to handle AI when we have it, but it’s bad faith to apply it to what we have today.

      Critically, what we have today will never ever go on strike, or really make any kind of correct moral decision on its own. We must treat it like dumb automation, because it is dumb automation.

    • acastcandream@beehaw.org · 1 year ago

      the fact of the matter is that the goal of AI is literally to replicate the function of a human brain

      …says who? That’s absolutely your feeling, not fact.