OpenAI is facing another privacy complaint in the European Union. This one, filed by privacy rights nonprofit noyb, targets the inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals.
This is an inherent, likely unfixable issue with LLMs because they simply don’t know right from wrong, or truth from fiction. All they do is output words that are likely to go together.
It’s literally just the Predictive Text game, or the “type <some prompt> and let your keyboard finish the sentence” meme. They don’t use the same algorithms (autocorrect is much less sophisticated), but they’re surprisingly similar in how they actually function.
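To make the analogy concrete, here’s a minimal sketch of the keyboard-style version of “output words that are likely to go together”: a bigram model. This is my own toy illustration, vastly simpler than an LLM, but the core move is the same: pick a statistically likely continuation, with no concept of truth anywhere.

```python
from collections import Counter, defaultdict

# Toy next-word predictor in the spirit of phone keyboard suggestions.
# A hypothetical tiny "training corpus":
corpus = (
    "the chatbot makes up facts . "
    "the chatbot makes up names . "
    "the model predicts the next word ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev_word):
    """Return the most frequent continuation of prev_word, or None."""
    counts = follows[prev_word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("chatbot"))  # "makes" — most common continuation
print(predict("makes"))    # "up"
```

An LLM replaces the frequency table with a neural network conditioned on a long context, but it is still scoring continuations, not checking facts.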
You can try to control what an LLM outputs by changing the prompt or by adjusting the model with negative feedback for certain combinations of words or phrases, but you can’t just tell it “don’t make up lies about people” and expect that to work.
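Here’s a sketch of why that kind of negative feedback is so limited, again using a hypothetical toy bigram model (real systems apply biases or fine-tuning at the token level, but the principle is the same). We can suppress a specific word pair with a penalty, but there is no way to express a concept like “false claims about a person” as a list of penalized tokens.

```python
from collections import Counter, defaultdict

# Toy corpus where the damaging continuation is the more frequent one.
corpus = "alice was convicted . alice was convicted . alice was promoted .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Hypothetical penalty list: specific word pairs we never want emitted.
penalized = {("was", "convicted")}

def predict(prev_word, bias=-10.0):
    """Score continuations by count, applying a negative bias to penalized pairs."""
    scores = {}
    for word, count in follows[prev_word].items():
        score = float(count)
        if (prev_word, word) in penalized:
            score += bias  # negative feedback for this exact combination
        scores[word] = score
    return max(scores, key=scores.get) if scores else None

print(predict("was"))            # "promoted" — suppressed by the bias
print(predict("was", bias=0.0))  # "convicted" — what the raw model prefers
```

The penalty only works because we enumerated the exact pair in advance; the model still has no idea that one continuation is defamatory and the other isn’t.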
By the way, for anyone interested in how ChatGPT works, the channel 3blue1brown recently put out a very good video on it.
Ok… Why the fuck is anyone asking LLMs for personal data? This doesn’t sound like an LLM problem; it sounds like someone unethically exploiting a gap in the law before the law catches up.