- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
Generative artificial intelligence (GenAI) company Anthropic has claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”.
Under US law, “fair use” permits the limited use of copyrighted material without permission, for purposes such as criticism, news reporting, teaching, and research.
In October 2023, a host of music publishers including Concord, Universal Music Group and ABKCO initiated legal action against the Amazon- and Google-backed generative AI firm Anthropic, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.
What they have is miles from artificial general intelligence; it is not AI in even a limited sense. It is AI in the same way a mob in a video game is AI.
Their claims to be approaching it are marketing fluff at best, and abject lies at worst.
I think if we sit here and debate the nuances of what is or is not intelligence, we will look back on this conversation and laugh at how pedantic it was. Movies have taught us that A.I. is hyper-intelligent, conscious, has its own objectives, is self-aware, etc. But corporations don’t care about that. In fact, to a corporation, I’m sure the most annoying thing about intelligence right now is that it comes packaged with its own free will.
People laugh at what is being called A.I. because it’s confidently wrong and “just complicated auto-complete”. But ask your coworkers some questions. I bet it won’t be long before they’re confidently wrong about something, and when they’re right, it’ll probably be them parroting something they learned. Most people’s jobs are things like: organize these items on those shelves, mix these ingredients and put them in a cup, get all these numbers from this website and put them in a spreadsheet, write a press release summarizing these sources.
Corporations already have the A.I. they need. Your gatekeeping of intelligence is just your ego protecting you from the truth: you, or someone dear to you, are already replaceable.
I think we both know that A.I. is possible; I’m saying it’s inevitable, and likely already at version 1. I’m sure any version of it would require access to training data, so the ruling here would translate. The only chance the general population has of keeping up with corporations in the ability to generate economic value is to keep the production of A.I. in the public space.