OpenAI just admitted it can’t identify AI-generated text. That’s bad for the internet and it could be really bad for AI models.
In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.
OpenAI also financially benefits from keeping the hype training rolling. Talking about how disruptive their own tech is gets them attention and investments. Just take it with a grain of salt.
It’s not possible to tell AI-generated text from human writing at any level of real-world accuracy. Just accept that.
Citation needed
How not? Have you ever talked to ChatGPT? It’s full of blatant lies and failures to understand context.
Just like your comment, you say? Indistinguishable from human: garbage in, garbage out.
If you actually used the technology rather than being a stochastic parrot, you’d understand :)
I… did… it was useless after I realized any research I asked it to help with led to it lying to me.
You don’t know what you’re talking about. The two aren’t distinguishable.
Clearly not :)
And? Blatant lies aren’t exclusive to AI texts. Right-wing media are full of blatant lies, yet they’re written by humans (for now).
The problem is, if you prompt the AI properly, you get exactly what you want. Prompt it a hundred times and you get a hundred different texts, posted to a hundred different social media channels, generating hype. How on earth will you be able to detect that?