The big AI models are running out of training data (and it turns out most of the training data was produced by fools and the intentionally obtuse), so this might mark the end of rapid model advancement

  • KnilAdlez [none/use name]@hexbear.net · 5 months ago

    I was reading an article about how ChatGPT will sometimes go on existential rants, and I figure it’s probably because so much of the training data is now generated by LLMs and posted on the internet. Probably a glut of people posting “I asked ChatGPT what it was like to be a robot” and things of that nature.