Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’::Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.
Because it can look up code for this specific problem in its enormous training data? It doesn't need to understand the concepts behind it as long as the problem is specific enough to have been solved already.
I can tell GPT to do a specific thing in a given context and it will do so intelligently. I can then provide additional context that implicitly changes the requirements and GPT will pick up on that and make the specific changes needed.
It can do this even if I’m trying to solve a novel problem.
If that were true, it shouldn’t hallucinate about anything that was in its training data. LLMs don’t work that way. There was a recent post with a nice simple description of how they work, but I’m not finding it. If you’re interested, there are plenty of videos and articles on the subject.
It doesn’t have the ability to just look up anything from its training data; that information is encoded in its parameters. The input has to be encoded in a way that triggers the right “chain reaction” of excited and non-excited neurons.
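To make the “encoded in its parameters” point concrete, here’s a toy sketch in Python. Everything in it is made up for illustration (the vocabulary, the layer sizes, the random weights); a real transformer has billions of learned parameters and many layers. The point it shows is that the input token is turned into activations by weight matrices and the output is a probability distribution over the next token, not a record retrieved from a table of training examples.

```python
# Toy illustration only -- not any real model's architecture or weights.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "+"]
vocab_size = len(vocab)
hidden_size = 16

# The "knowledge" lives in weight matrices like these (here just random
# placeholders). There is no stored copy of the training data to look up.
embedding = rng.normal(size=(vocab_size, hidden_size))
hidden_weights = rng.normal(size=(hidden_size, hidden_size))
output_weights = rng.normal(size=(hidden_size, vocab_size))

def next_token_distribution(token_id: int) -> np.ndarray:
    """Run one token through the network and return P(next token)."""
    x = embedding[token_id]                   # encode the input
    h = np.maximum(0, x @ hidden_weights)     # neurons "excited" or not (ReLU)
    logits = h @ output_weights
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                # softmax: statistics, not lookup

probs = next_token_distribution(vocab.index("return"))
print(vocab[int(probs.argmax())])  # the statistically most likely next token
```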
Beyond that, it’s not just a carbon copy of what was in the training data either, because you can tell it which variable names to use, what order to do things in, change some details, and so on. If it were simply a lookup, that wouldn’t be possible. The training made it able to generalize what it learned to some extent.
Yes, but it doesn’t do so because it understands what a variable is; it does so because it has statistics about where variables are most likely to appear.
In a way it’s like the guy who won the French Scrabble championship without speaking a single word of French, by learning the words in the dictionary.