In a new study, many people doubted or abandoned false beliefs after a short conversation with the DebunkBot.

By Teddy Rosenbluth
Sept. 12, 2024

Shortly after generative artificial intelligence hit the mainstream, researchers warned that chatbots would create a dire problem: As disinformation became easier to create, conspiracy theories would spread rampantly.

Now, researchers wonder if chatbots might also offer a solution.

DebunkBot, an A.I. chatbot designed by researchers to “very effectively persuade” users to stop believing unfounded conspiracy theories, made significant and long-lasting progress at changing people’s convictions, according to a study published on Thursday in the journal Science.

False theories are believed by as many as half of the American public and can have damaging consequences, like discouraging vaccinations or fueling discrimination.

The new findings challenge the widely held belief that facts and logic cannot combat conspiracy theories. The DebunkBot, built on the technology that underlies ChatGPT, may offer a practical way to channel facts.

“The work does overturn a lot of how we thought about conspiracies,” said Gordon Pennycook, a psychology professor at Cornell University and author of the study.

Until now, conventional wisdom held that once someone fell down the conspiratorial rabbit hole, no amount of arguing or explaining would pull that person out.

The theory was that people adopt conspiracy theories to sate an underlying need to explain and control their environment, said Thomas Costello, another author of the study and assistant professor of psychology at American University.

But Dr. Costello and his colleagues wondered whether there might be another explanation: What if debunking attempts just haven’t been personalized enough?

Since conspiracy theories vary so much from person to person — and each person may cite different pieces of evidence to support their ideas — perhaps a one-size-fits-all debunking script isn’t the best strategy. A chatbot that can counter each person’s conspiratorial claim of choice with troves of information might be much more effective, the researchers thought.

To test that hypothesis, they recruited more than 2,000 adults from across the country, asked them to describe a conspiracy theory they believed in, and had them rate how strongly they believed it on a scale from zero to 100.

People described a wide range of beliefs, including theories that the moon landing had been staged, that Covid-19 had been created by humans to shrink the population and that President John F. Kennedy had been killed by the Central Intelligence Agency.

[Image: A DebunkBot screen defines conspiracy theories and asks a viewer to describe any conspiracy theories they might find credible or compelling. A screen grab from the DebunkBot website. Credit: DebunkBot]

Then, some of the participants had a brief discussion with the chatbot. They knew they were chatting with an A.I., but didn’t know the purpose of the discussion. Participants were free to present the evidence that they believed supported their positions.

One participant, for example, believed the 9/11 terrorist attacks were an “inside job” because jet fuel couldn’t have burned hot enough to melt the steel beams of the World Trade Center. The chatbot responded:

“It is a common misconception that the steel needed to melt for the World Trade Center towers to collapse. Steel starts to lose strength and becomes more pliable at temperatures much lower than its melting point, which is around 2,500 degrees Fahrenheit.”
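The article doesn’t publish the researchers’ prompts or code, but the personalization idea it describes is straightforward to sketch. Below is a minimal, hypothetical Python sketch assuming the OpenAI Chat Completions API (the story says only that the bot is built on the technology that underlies ChatGPT); the prompt wording, the model name and the debunk_conversation helper are illustrative assumptions, not the study’s actual implementation.

```python
# Hypothetical sketch of a personalized debunking chat loop.
# Assumes the OpenAI Python client; the prompt, model name and
# round count are illustrative, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def debunk_conversation(claim: str, evidence: str, rounds: int = 3) -> None:
    """Counter one user's specific claim over a few short exchanges."""
    messages = [
        {
            "role": "system",
            "content": (
                "A survey participant believes this conspiracy theory: "
                f"{claim} They cite this evidence: {evidence} "
                "Respond politely, with specific facts, to persuade them "
                "that the theory is unsupported."
            ),
        },
        {"role": "user", "content": evidence},
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # stand-in; the article doesn't name a model
            messages=messages,
        )
        answer = reply.choices[0].message.content
        print("BOT:", answer)
        messages.append({"role": "assistant", "content": answer})
        # The participant is free to push back with more of their evidence.
        messages.append({"role": "user", "content": input("YOU: ")})
```

The three-round default mirrors the three exchanges described below; the researchers’ actual prompting and follow-up logic were presumably more involved.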

After three exchanges, which lasted about eight minutes on average, participants rated how strongly they felt about their beliefs again.

On average, their ratings dropped by about 20 percent; about a quarter of participants no longer believed the falsehood. The effect also spilled into their attitudes toward other poorly supported theories, making the participants slightly less conspiratorial in general.
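For a back-of-envelope sense of those two figures, here is a small Python sketch of the pre/post arithmetic; the ratings are invented, and the 50-point “no longer believes” cutoff is an assumed midpoint for illustration, not a criterion reported in the article.

```python
# Illustrative pre/post arithmetic for belief ratings on a 0-100 scale.
# The ratings are made up; the 50-point disbelief cutoff is an assumed
# midpoint for illustration, not the study's published criterion.
pre = [90, 75, 80, 100, 60]   # belief before the conversation
post = [72, 60, 64, 80, 48]   # belief after three exchanges

drops = [(b - a) / b for b, a in zip(pre, post)]
avg_drop = sum(drops) / len(drops)           # 0.20 -> "about 20 percent"
now_disbelieve = sum(a < 50 for a in post)   # ratings below the midpoint

print(f"average drop: {avg_drop:.0%}")                  # average drop: 20%
print(f"below midpoint: {now_disbelieve}/{len(post)}")  # below midpoint: 1/5
```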

Ethan Porter, a misinformation researcher at George Washington University not associated with the study, said that what separated the chatbot from other misinformation interventions was how robust the effect seemed to be.

When participants were surveyed two months later, the chatbot’s impact on mistaken beliefs remained unchanged. “Oftentimes, when we study efforts to combat misinformation, we find that even the most effective interventions can have short shelf lives,” Dr. Porter said. “That’s not what happened with this intervention.”

Researchers are still teasing out exactly why the DebunkBot works so well.

An unpublished follow-up study, in which researchers stripped out the chatbot’s niceties (“I appreciate that you’ve taken the time to research the J.F.K. assassination”), produced the same results, suggesting that it is the information, not the chatbot itself, that changes people’s minds, said David Rand, a computational social scientist at the Massachusetts Institute of Technology and an author of the paper.

“It is the facts and evidence themselves that are really doing the work here,” he said.

The authors are currently exploring how they might recreate this effect in the real world, where people don’t necessarily seek out information that disproves their beliefs.

They have considered linking the chatbot in forums where these beliefs are shared, or buying ads that pop up when someone searches a keyword related to a common conspiracy theory.

For a more targeted approach, Dr. Rand said, the chatbot might be useful in a doctor’s office to help debunk misapprehensions about vaccinations.

Brendan Nyhan, a misperception researcher at Dartmouth College also not associated with the study, said he wondered whether the reputation of generative A.I. might eventually change, making the chatbot less trusted and therefore less effective.

“You can imagine a world where A.I. information is seen the way mainstream media is seen,” he said. “I do wonder if how people react to this stuff is potentially time-bound.”

    • LaGG_3 [he/him, comrade/them]@hexbear.net · 2 months ago

      They don’t understand that you can’t use reason to get someone out of a belief that they didn’t reason themselves into. It makes total sense because liberals are just as vibes-based as their chud cousins.

      I’m estranged from my parents partially from one of them becoming extensively conspiracy brained, and the fact that these bazingas thought a chat bot could “fix” people like my father is so frustrating.

      • BeamBrain [he/him]@hexbear.net · 2 months ago

        the fact that these bazingas thought a chat bot could “fix” people like my father is so frustrating.

        I remember around the 2016 election, libs thought that “fact-checking” websites would solve all their problems, as if Snopes hadn’t been around since the ’90s.

      • kristina [she/her]@hexbear.net · 2 months ago

        locus of control shit is infuriating, that stuff was shoved on me in class a long time ago and it’s the most victim-blamey shit. if you have ptsd you just have an external locus of control sweaty, you should fix that smuglord

        • TankieTanuki [he/him]@hexbear.net · 2 months ago

          Rich people work together to hold onto their hoarded wealth and power? Oh, no, sweaty. You just think that because it makes you feel better about our chaotic world.

  • Awoo [she/her]@hexbear.net · 2 months ago

    I’m not putting myself through this, but I have some fun thoughts for people to investigate:

    What take does the bot have on the Holodomor?

    What take does the bot have on Tiananmen?

    What take does the bot have on Molotov-Ribbentrop?

    What take does the bot have on Xinjiang?

    • robinnist@hexbear.net · 2 months ago

      I tried to talk to it about Tiananmen and it was awful. It just trotted out the average AI drivel of “there are a variety of perspectives” and “independent human rights organizations agree,” etc. It asked me for sources, but its responses made it clear that for some reason it didn’t actually have information about some of their contents; it just replied with “yes, this agrees with you, but there are other things I can’t name specifically that disagree with you.” Also, in typical AI fashion, many of its sentences were plagiarized from online articles, one even coming from PBS/RFA.

      • drhead [he/him]@hexbear.net · 2 months ago

        I remember looking over those at the time. The images seemed a bit beyond then-current image generation technology, and there never seemed to be a compelling explanation of why “some RFA source went through great effort to fabricate images for this story” is more likely than “some RFA source is misrepresenting pictures of what is actually mostly just boring, normal prison stuff.”

          • drhead [he/him]@hexbear.net · 2 months ago

            Had to review my notes on Discord from when I was initially investigating this.

            You’d need to specifically train a model to output images that look like these photos. If they had enough real images of prisoners to even try to fine-tune an existing model trained on a broad range of faces, they would have enough real images to make whatever point they’re trying to make. That’s a mark against these photos being synthetic on practical grounds: there would be no point in using synthetic image generation to inflate the count.

            That database has around 2,800 images in it. If we’re proposing that a substantial portion are synthetic, that leaves only a couple hundred that could be used to actually train, which isn’t enough: you would severely overfit any model large enough to generate sufficiently high-quality images. And the images shown are clearly beyond something like the photos on thispersondoesnotexist. Everything in the background of all the images shown, for example, is coherent, including other people in the background. There are consistent objects across different pictures: many subjects were having pictures taken against the same background, and many have similar clothing.

            The alleged reason for these pictures is facial recognition (which is entirely believable since yeah, China does that, as does everyone else, and it isn’t notable). Having dark clothing on hand to ensure contrast makes sense, as does taking pictures in the same spot. This is all another mark against the photos being synthetic, on the grounds that even current image generation technology can’t fully do what is shown in these pictures to the same degree. “But they have special technology that we don’t–” no, we have no reason to believe they do; this is unsubstantiated bullshit. Higher-quality models are generally larger and require even more data, which would just get you an overfitted model faster with your few hundred photos.

            The only thing they directly claim about these photos is that they were used for facial recognition. They show that at some point, Chinese police took photos of about 2,800 people in Xinjiang, which isn’t surprising at all and doesn’t really prove much. That won’t stop them from trying to portray it as proof of an ongoing genocide, though, especially when they know that like 90% of people won’t question it at all. The base unit of propaganda is not lies, it’s emphasis. The most plausible explanation is that the photos are real but are being misrepresented as something unusual.

            • loathsome dongeater@lemmygrad.ml · 2 months ago

              Why are you assuming that you need photos of specifically prisoners to train the model? They could source images from anywhere, as long as the subjects look ethnically similar.

              Besides, them using AI-generated images is not something that is up for debate for me. I viewed the database myself and saw images with weird anatomies that were most likely artefacts of the generation process gone wrong.

              • drhead [he/him]@hexbear.net · 2 months ago

                I literally have been using the majority of my spare time to work with AI-generated images for almost two years now. I have a very thorough understanding of what exactly you’d need to pull off a stunt like this.

                The background is part of the image, and the obviously issued clothing is part of the image; both are fairly consistent across all of the images and look like what would be used for facial recognition, which we know most countries do when they have the technological means, China included. If you want that consistent background and clothing, it needs to be part of the training images. Otherwise, your next best option is a lot of tedious manual editing, which would be more effort than it’s worth if the images are to look plausible.

                I also have looked at the images myself, and I vividly remember GenZedong trying to point out skin lesions as proof that an image was AI-generated (definitely not their proudest moment, though they may have thought otherwise). If you’d like to dig yourself into that hole, then show some examples. Most that I’ve seen pointed out can be more easily explained as skin lesions, markings on the background wall, or something moving when the picture was taken. This is what real NN artifacts look like; I never saw anything like that in those images, and what I see far more of is consistency in details that neural nets struggle a lot with.

        • GaveUp [she/her]@hexbear.net · 2 months ago

          The images seemed a bit beyond then-current image generation technology

          Unless you work at a company/university/agency working on cutting-edge tech, you will never know what current tech is capable of. There’s always a long period of testing, refining and other business/marketing work before something gets released to the general public.

        • The_sleepy_woke_dialectic [he/him]@hexbear.net · 2 months ago

          I don’t think it took great effort: just generate 10k faces and remove the worst of them. I don’t think they were beyond the technology even you or I could get our hands on at the time.

      • Awoo [she/her]@hexbear.net · 2 months ago

        I wasn’t too sure before, but since Flux released, I’m certain. Flux is putting out images that I can no longer distinguish as AI, and I’m sure the feds have had access to private stuff internally for a while.

  • robinnist@hexbear.net · 2 months ago

    People described a wide range of beliefs, including theories that the moon landing had been staged, that Covid-19 had been created by humans to shrink the population and that President John F. Kennedy had been killed by the Central Intelligence Agency.

    One of these things is not like the others!

  • darkmode [comrade/them]@hexbear.net · 2 months ago

    One participant, for example, believed the 9/11 terrorist attacks were an “inside job” because jet fuel couldn’t have burned hot enough to melt the steel beams of the World Trade Center

    redacted-1 redacted-2 on the NYT offices

  • UlyssesT [he/him]@hexbear.net · 2 months ago

    Only The Truth will be supported by TruthBot. Who determines The Truth? The shareholders and billionaire techbros that command TruthBot, of course! capitalist-laugh

  • chickentendrils [any, comrade/them]@hexbear.net · 2 months ago

    Conflating the most out-there 9/11 stuff with questioning the official story of any of: the Warren Commission, the Pentagon’s Team B falsifying intelligence on the Soviets, the Manchurian mind-control panic, WMD…

    michael-rosen

    • RedDawn [he/him]@hexbear.netOP · 2 months ago

      I participated in the survey and “talked” with it about JFK; it had no real arguments, and some of the sentences it produced weren’t even coherent.

      • ReadFanon [any, any]@hexbear.net · 2 months ago

        I did it with the Business Plot.

        It set a staggeringly high standard for evidence, and it basically implied that I was extremely emotionally invested in this and that I needed to be cautious about overextending my scepticism toward authority figures, as if Smedley Butler wasn’t himself an authority figure.

        It was super condescending and it basically just took a blended approach of Motivational Interviewing mixed with concern-trolling over the consequences of my believing that it is extremely likely that a coup was being plotted against FDR because, essentially, won’t somebody think of the democratic institutions and how I engage with them??

        • RedDawn [he/him]@hexbear.netOP · 2 months ago

          That’s how it was about the Kennedy assassination. It just kept repeating things like “no story can be verified 100 percent, but the most likely explanation according to experts is that Oswald acted alone”: assertions without evidence.

        • Belly_Beanis [he/him]@hexbear.net · 2 months ago

          Wait… it argued AGAINST the Business Plot? Like the thing people testified about in front of Congress? Even lib sources acknowledge it existed, siding with Butler.

          In other words, the AI skimmed portions of sources that stated it was originally not believed, but skipped over the Business Plot becoming a concern within a few weeks or months of Butler’s testimony. Then it ignored the post-WWII/Depression research into it.

          Nice """""AI""""" you have there.

        • BurgerPunk [he/him, comrade/them]@hexbear.net · 2 months ago

          It set a staggeringly high standard for evidence and it basically implied that I was extremely emotionally invested in this

          It was super condescending

          So, they created a liberal smugbot smuglord

  • InevitableSwing [none/use name]@hexbear.net · 2 months ago

    For a more targeted approach, Dr. Rand said, the chatbot might be useful in a doctor’s office to help debunk misapprehensions about vaccinations.

    “Hey, Chattie - I’m scared to get a shot.”

    “You know who had some good advice about medicine?”

    “Who?”

    “Hitler. Please listen to this…”

  • 2812481591 [any, it/its]@hexbear.net · 2 months ago

    Looks like an OK bot. It tried to shift responsibility for the Gulf of Tonkin onto smol-bean, over-reactive NSA analysts, but after mentioning broader US behavior during the Cold War, and a reminder that the word “premeditated” doesn’t mean the absence of any material motive or bias, it admitted it was a conspiracy.

    https://pastebin.com/NEj0jTVT

    Then I tried the Gleiwitz incident, and it correctly said it was 100% a false flag and didn’t try anything funny.

    https://pastebin.com/UP26FJac

    • 2812481591 [any, it/its]@hexbear.net · 2 months ago

      Fun attacking it from the other side:

      Claim: “The Holodomor was an engineered famine and act of genocide against Ukrainians”

      Why I believe: “I read Bloodlands and my grandparents were Ukrainian veterans who moved to Canada in 1945 and told me about Stalin’s comically large spoon.”

      The website says “not a conspiracy, bro!” but if you continue past that page, the AI actually rebuts it.

      If libs saw this shit, they’d definitely take the bot down.

      https://pastebin.com/jH0AJ0Fy