OpenAI just admitted it can’t identify AI-generated text. That’s bad for the internet and it could be really bad for AI models.

In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.

  • diffuselight@lemmy.world · 1 year ago

    It’s not possible to tell AI-generated text from human writing at any level of real-world accuracy. Just accept that.

    • Queen HawlSera@lemm.ee · 1 year ago

      How not? Have you ever talked to ChatGPT? It’s full of blatant lies and failures to understand context.

      • diffuselight@lemmy.world · 1 year ago

        Just like your comment, you mean? Indistinguishable from a human: garbage in, garbage out.

        If you actually used the technology rather than being a stochastic parrot, you’d understand. :)

      • void_wanderer@lemmy.world · 1 year ago (edited)

        And? Blatant lies are not exclusive to AI-generated text. Right-wing media outlets are full of blatant lies, yet those are written by humans (for now).

        The problem is that if you prompt the AI properly, you get exactly what you want. Prompt it a hundred times and you get a hundred different texts, posted to a hundred different social media channels, generating hype. How on earth will you be able to detect that?
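        For illustration only, here is a minimal sketch of that kind of mass generation, assuming the official `openai` Python client (v1+) and an API key in the environment; the model name, prompt, and product are placeholders, not anything from the article or thread:

        ```python
        # Hypothetical sketch: generate many differently worded versions of the same
        # promotional message. Model name, prompt, and product are placeholders.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        PROMPT = "Write a short, enthusiastic social media post about the GadgetPro 3000."

        posts = []
        for _ in range(100):
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": PROMPT}],
                temperature=1.0,      # higher temperature -> more varied wording
            )
            posts.append(response.choices[0].message.content)

        # Each of the 100 posts is phrased differently, so there is no single
        # shared fingerprint for a detector to latch onto.
        ```

        Because every run samples a different completion, the hundred posts share no obvious signature that a classifier could reliably flag as machine-written.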