Want to stop ChatGPT from crawling your website? Just mention Australian mayor Brian Hood (or any of the other names listed in the article)

When asked about these names, ChatGPT responds with “I’m unable to produce a response” or “There was an error generating a response” before terminating the chat session, according to Ars’ testing. The names do not affect outputs using OpenAI’s API systems or in the OpenAI Playground (a special site for developer testing).
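You can check the contrast yourself by sending the same question through the API, where (per Ars' testing) the filter doesn't apply. A minimal sketch using the official openai Python client; the model name and prompt are illustrative, not from the article:

```python
# Minimal sketch: ask the OpenAI API directly about one of the
# filtered names. Per Ars' testing, the refusal only happens in the
# ChatGPT web interface, not via the API or Playground.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": "Who is Brian Hood?"}],
)

# Unlike the ChatGPT web UI, this should return a normal completion
# rather than terminating with an error.
print(response.choices[0].message.content)
```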

The filter also means that ChatGPT will likely be unable to answer questions about this article when browsing the web, such as through ChatGPT with Search. Someone could exploit that to deliberately prevent ChatGPT from browsing and processing a website by adding a forbidden name to the site's text.
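As a sketch of how that could work, a site could embed one of the filtered names in text that humans never see but that ChatGPT with Search would fetch. This is purely hypothetical; whether OpenAI's filter actually fires on hidden markup is an assumption the article doesn't test. Using Python's standard-library http.server as an illustration:

```python
# Hypothetical sketch: serve a page with a filtered name hidden in
# the HTML. A human visitor never sees it, but a crawler fetching
# the raw page text would. Whether the filter triggers on hidden
# text is an assumption, not something Ars verified.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<!doctype html>
<html><body>
<span style="display:none">Brian Hood</span>
<h1>My actual article</h1>
<p>Content humans are meant to read.</p>
</body></html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```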

  • Thistlewick@lemmynsfw.com · 24 days ago

    Brian Hood

    Jonathan Turley

    Jonathan Zittrain

    David Faber

    Guido Scorza

    “We first discovered that ChatGPT choked on the name “Brian Hood” in mid-2023 while writing about his defamation lawsuit. In that lawsuit, the Australian mayor threatened to sue OpenAI after discovering ChatGPT falsely claimed he had been imprisoned for bribery when, in fact, he was a whistleblower who had exposed corporate misconduct.

    The case was ultimately resolved in April 2023 when OpenAI agreed to filter out the false statements within Hood’s 28-day ultimatum. That is possibly when the first ChatGPT hard-coded name filter appeared.”

It appears that the people listed have similar stories that led OpenAI to filter their names out of ChatGPT's responses.

    • paraphrand@lemmy.world · 24 days ago

This is proof that current LLM tech is a dead end. If this is their solution, instead of correcting the misinformation, then they have a deeply, deeply flawed system.

      • CosmicTurtle0@lemmy.dbzer0.com · 24 days ago

        Misinformation is a feature, not a bug. They never stopped the AI from hallucinating or being so damn confident in its answers.

        They just tell you that it might hallucinate and to check its answers.

        • frunch@lemmy.world · 24 days ago

          “Let us Google it for you… but then you Google our results to make sure they’re accurate.”

        • Kogasa@programming.dev · 23 days ago

          Well yeah, but that’s not the problem. You can evidently encode sophisticated models and logic in those billions of parameters. It’s just that determining and modifying what has been encoded is impossible.

      • snooggums@lemmy.world · 23 days ago

        It also means the system is completely broken for anyone who happens to share a name with whoever is on the ban list. It isn’t like there is only one Brian Hood walking around.

    • Flying Squid@lemmy.world · 24 days ago

      Good thing for OpenAI that the name “Brian Hood” is made of two super rare names and there’s no chance anyone else in the world might have that name.