- cross-posted to: [email protected]
The big AI models are running out of training data (and it turns out most of it was produced by fools and the intentionally obtuse), so this might mark the end of rapid model advancement.
While synthetic data is a thing, you’ve really gotta wonder how many generations you can train a model on basically empty calories before the hallucination rate starts going up.
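
For anyone curious why the feedback loop degrades things, here’s a toy sketch of what researchers call “model collapse”: fit a simple model to some data, sample “synthetic” data from the fit, retrain on that, and repeat. The Gaussian stand-in and the sample sizes below are made up purely for illustration, real LLM training is obviously nothing like this, but the finite-sample feedback loop is the suspected mechanism:

```python
import random
import statistics

random.seed(42)

def fit_gaussian(samples):
    """'Train' a model: estimate the mean and std dev of the data."""
    return statistics.mean(samples), statistics.pstdev(samples)

# Generation 0: a small corpus of "real" data from a standard normal.
N = 20  # deliberately tiny so the collapse shows up quickly
data = [random.gauss(0.0, 1.0) for _ in range(N)]

for gen in range(1, 51):
    mu, sigma = fit_gaussian(data)
    # Each generation is trained purely on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(N)]
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Run it and the standard deviation tends to shrink toward zero: every round of refitting loses a bit of the tails, and the distribution collapses onto whatever quirks the earlier generations happened to amplify. The empty-calories intuition, in code form.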
I, for one, hope the theftbots die.
I was reading an article about how ChatGPT will sometimes go on existential rants, and I figure it’s probably because so much of the training data is now generated by LLMs and posted on the internet. Probably a glut of people posting “I asked ChatGPT what it was like to be a robot” and things of that nature.
Hopefully they die off before the entire net is just an all-consuming ouroboros of LLM-generated garbage.