• kbal@fedia.io · 7 days ago

    I find myself suspecting that chatbots getting really good at talking people into believing whatever their operators want them to believe is going to start a lot more conspiracy theories than it ends.

    • kbal@fedia.io · 6 days ago

      … I hope so anyway, because the obvious alternative of the chatbots remaining under the control of an elite few while everyone falls into the habit of believing whatever they say seems substantially worse.

      I guess the optimistic view would be to hope that a crowd of very persuasive bots participating in all kinds of media, presenting opinions that are just as misguided as the average human but much more charismatic and convincing, will all argue for different, conflicting things, leading to a golden age full of people who’ve learned that it’s necessary to think critically about whatever they see on the screen.

      • CanadaPlus@lemmy.sdf.org · 6 days ago

        The interaction between society and technology continues to be borderline impossible to predict. I hope factually less-true beliefs are still harder to defend, at least.

  • Handles@leminal.space · 7 days ago

    According to that research mentioned in the article, the answer is yes. The big caveats are

    • that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.
    • you need a level of “AI” that isn’t going to start hallucinating and instead reinforce the subjects’ conspiracy beliefs. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.

    • CanadaPlus@lemmy.sdf.org · 6 days ago

      that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.

      You overestimate how hard it is to get a conspiracy theorist to click on something. I don’t know, it seems promising to me. I worry more that it could be used to sell things more nefarious than “climate change is real”.

      you need a level of “AI” that isn’t going to start hallucinating and instead reinforce the subjects’ conspiracy beliefs. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.

      They used a purpose-finetuned GPT-4 model for this study, and it didn’t go off script in that way once. I bet you could make it if you really tried, but if you’re doing adversarial prompting, you’re not the target for this thing anyway.