OpenAI just admitted it can’t identify AI-generated text. That’s bad for the internet and it could be really bad for AI models.

In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.

  • volodymyr@lemmy.world
    1 year ago

    I don’t think it’s possible to always assume you might be misled; the influence remains even when it goes unnoticed. It’s also not advisable to be too suspicious, since that breeds a conspiratorial mindset. This is the dark side of critical thinking. The information space is already loaded with trash, and AI is about to amplify it. I think we need personal identity management, and AI agents will have identities too. The danger is that this is hard to do on the free internet. But it is possible in part; the technologies exist.

    • Ogmios@lemmy.world
      1 year ago

      Frankly, with open access to the entire world, there are a very large number of completely real conspiracies you are connected to, through intelligence agencies, mafias, and terrorist organizations. Failure to recognize this fact is a big problem with the way the Internet has been designed.

      • volodymyr@lemmy.world
        1 year ago

        There are real conspiracies, but a conspiratorial mindset is still unhealthy. There is a joke: “even if you do have paranoia, it does not mean that THEY are not actually watching you.” It’s just important not to descend into paranoia, even when it starts from legitimate concerns. It’s also important to be aware that one person cannot derive all knowledge by themselves, so it is necessary to trust, even if only conditionally. But right now, there is no established technical process for choosing whom to trust. I just believe that most people here are not bots and not crazy.