From the article:

This chatbot experiment reveals that, contrary to popular belief, many conspiracy thinkers aren’t ‘too far gone’ to reconsider their convictions and change their minds.

  • Shark_Ra_Thanos@lemmy.ml · 11 days ago

    Is it a theory when we have proof? I mean, it’s sort of obvious to say that Psychiatry is no different from MKULTRA. You could quibble over whether one literally IS the other, but what’s the fucking difference?

    Oh yeah. Psychiatry is private. MKULTRA is a weapon. Not much of a difference either way. They’re both targeting wallets.

  • LucidBoi@lemmy.dbzer0.com · 16 days ago

    Another way of looking at it: “AI successfully used to manipulate people’s opinions on certain topics.” If AI can persuade people to stop believing conspiracy theories, it can also be used to make them believe conspiracy theories.

    • davidgro@lemmy.world · 16 days ago

      Anything can be used to make people believe them. That’s not new or a challenge.

      I’m genuinely surprised that removing such beliefs is feasible at all though.

      • Angry_Autist (he/him)@lemmy.world · 10 days ago

        1. The person needs to have a connection to the conspiracy theorist that is stronger than the identity valence gained by adopting these conspiracies

        2. The person needs to speak emotionally and sincerely, drawing on direct experience (cookie-cutter arguments rarely work here)

        3. The person needs to genuinely desire the improvement of the other’s life

        That is the only way I have ever witnessed it personally work, and it still took weeks.

      • SpaceNoodle@lemmy.world · 16 days ago

        If they’re gullible enough to be suckered into it, they can similarly be suckered out of it - but clearly the effect would not be permanent.

          • Angry_Autist (he/him)@lemmy.world · 10 days ago

            Logic isn’t the only way to persuade; in fact, all the evidence seems to show it works on very few people.

            Everyone discounts sincere emotional arguments, but frankly that’s all I’ve ever seen work on conspiracy-heads.

  • The Snark Urge@lemmy.world · 16 days ago

    Let me guess: the good news is that conspiracism can be cured, but the bad news is that LLMs are able to shape human beliefs. I’ll go read now and edit if I was pleasantly incorrect.

    Edit: They didn’t test the model’s ability to inculcate new conspiracies; obviously that’d be a fun day at the office for the ethics review board. But I bet it’s very possible with a malign LLM.

    • Angry_Autist (he/him)@lemmy.world · 10 days ago

      “LLMs are able to shape human beliefs.”

      FUCKING THANK YOU!

      I have been trying to get people to understand that the danger of AI isn’t some deviantart pngtuber not getting royalties for their Darkererer Sanic OC. It’s the fact that AI can appear like any other person on the internet, can engage from multiple accounts, has access to nearly someone’s entire web history, and can craft 20 believable scenarios catered to every weakness in that person’s psychology.

      I’m glad your post is getting at least some traffic, but even then it’s not gonna be enough.

      The people that understand the danger have no power to stop it, the people with the power to stop it are profiting off of it and won’t stop unless pressured.

      And we can’t pressure them while we’re arguing over art rights and shitposting constantly.

        • The Snark Urge@lemmy.world · 10 days ago

        We need to make it simpler and connect the dots. Like, what’s the worst that could happen when billionaires have exclusive control over a for-profit infinite gaslighting machine? This needs to be spelled out.

          • Angry_Autist (he/him)@lemmy.world · 10 days ago

            I’m writing a short horror story that will at least illustrate what I see as the problem. That’s a form that can be easier to digest.