Scientists Use AI to Prove People Can Be Talked Out of Conspiracy Theories - eviltoast

From the article:

This chatbot experiment reveals that, contrary to popular belief, many conspiracy thinkers aren’t ‘too far gone’ to reconsider their convictions and change their minds.

  • some_guy@lemmy.sdf.org · 2 months ago

    The researchers think a deep understanding of a given theory is vital to tackling errant beliefs. “Canned” debunking attempts, they argue, are too broad to address “the specific evidence accepted by the believer,” which means they often fail. Because large language models like GPT-4 Turbo can quickly reference web-based material related to a particular belief or piece of “evidence,” they mimic an expert in that specific belief; in short, they become a more effective conversation partner and debunker than can be found at your Thanksgiving dinner table or heated Discord chat with friends.

    This is great news. The emotional labor needed to talk these people down is mentally and emotionally draining. Offloading it to software is a great use of the technology, one with real value.