LLMs run on neural networks whose internals are not human-readable, so usually not even the AI engineers who built one can tell what the LLM remembers, deduces, or infers when responding to a prompt; there is no code to inspect. You could train it on nothing but Wikipedia and you still wouldn’t know when it hallucinates an answer, because an LLM doesn’t actually know what “facts” and “truth” mean. It is only a language machine that puts words together, not a fact machine.
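The “puts words together” point can be sketched with a toy bigram model. This is a deliberately simplified illustration, not a real LLM (the corpus and code are hypothetical), but it shows the core mechanism: the next word is sampled from learned statistics, with no notion of truth anywhere in the process.

```python
import random
from collections import defaultdict

# Tiny training corpus. Note the last sentence is false --
# the model has no way to know or care.
corpus = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the sun is a star . "
    "the moon is a star . "
).split()

# Learn word -> possible-next-word transitions by counting.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        choices = transitions.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the", 8))
```

Because the false sentence is in the training data, the model can fluently emit claims like “the moon is a star” with exactly the same confidence as true ones; fluency and factuality are simply unrelated at this level.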