When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

Recent research goes a long way toward forming a new psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. That research also offers an experimental examination of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.

  • floofloof@lemmy.ca · ↑3 · 6 hours ago

    In general, “fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation,” they write.

    People have always conflated confidence with ability and knowledge. That’s why so many positions of power are occupied by confident bullshitters. It seems like that tendency transfers over to people’s interactions with LLMs.

    It would be interesting to experiment with an LLM trained to sound less confident and more tentative or self-deprecatory. Maybe the results would be different.
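
    One cheap way to probe that without any retraining: wrap a stock model in a system prompt that forces hedged phrasing. A minimal, purely hypothetical sketch (the model name, prompt wording, and helper name are placeholders, not anything from the study):

    ```python
    # Hypothetical sketch: a prompt-level stand-in for a "tentative" model.
    # Model name, prompt wording, and function name are all placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    TENTATIVE_STYLE = (
        "Answer the question, but hedge every claim: flag your uncertainty "
        "explicitly, avoid stating anything as settled fact, and end by "
        "asking the user to verify the answer themselves."
    )

    def tentative_answer(question: str) -> str:
        """Return a deliberately hedged answer to the question."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model would do
            messages=[
                {"role": "system", "content": TENTATIVE_STYLE},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content
    ```

    The experiment would then compare how often users scrutinize or override answers from this hedged condition versus a default, confident one.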

  • NigelFrobisher@aussie.zone · ↑8 · 2 days ago

    I’m seeing this, even in intelligent people. They expect they can just keep prompting and reach a 100% correct answer that needs no human verification. Looks like an earlier phase of AI Psychosis to me.

  • TheTechnician27@lemmy.world · ↑37 · edited · 3 days ago

    Yada yada here’s the open-access paper.

    (I usually provide these links neutrally, but I’ll make a point here: in a public health community, it may be worth requiring a link to the paper alongside the news article covering it – especially if it’s open-access. Ars here is mercifully concerned with methodology; many outlets don’t give a shit.)

    Conclusion is as follows (for expedience; I encourage reading other parts):

    As AI becomes ubiquitous in society, understanding how it reshapes human thought is essential. Tri-System Theory [author’s note: introduced in this paper; tenuous to call it a “theory” on that basis] offers a new framework for this cognitive frontier. By introducing System 3 (Artificial) as a distinct and external reasoning process, we move beyond the classical architecture of dual-process theories and chart a new decision-making paradigm: one where intuition, deliberation, and artificial cognition coexist, compete, or converge. We show that people not only use System 3 to assist with reasoning, but often surrender to its outputs whether correct or flawed. This cognitive surrender illustrates the value and integration of System 3, but also highlights the vulnerability of System 3 usage. Similar to how System 1-driven heuristics lead to systematic biases, System 3 has differential cognitive shortcomings that will challenge decision-makers and society at large.

    Tri-System Theory is not a warning about AI’s dangers but a recognition of System 3’s psychological presence. We do not merely use AI; we think with it. [author’s note] In doing so, we must ask new questions: What happens when our judgments are shaped by minds not our own? What becomes of intuition and effort when a generative, artificial partner stands ready to answer? How do we preserve agency, reflection, and autonomy in a world where users engage in cognitive surrender? We offer Tri-System Theory as a conceptual foundation for understanding these challenges. It is a theory for an age of human-AI algorithmic cognition, and for the decision-makers, researchers, and designers shaping that future.

    • Tiresia@slrpnk.net · ↑16 ↓2 · 3 days ago

      I think this paper is overly exoticizing AI. People have always been externalizing deliberation to others, be they parents, friends, bosses, partners, gods, spirits, journalists, advertisers, superstitions, tarot cards, or rubber ducks.

      Perhaps it is worth calling all of these “system 3”, but I see no reason to separate LLMs from them. Our judgment has never been entirely our own, and even if there is nobody else to defer to, we can defer to “what they would do”.

      We accept that these external sources are flawed and can give us bad advice that we follow, but we keep listening as long as we think that is made up for by good advice or other factors.

      • OpenStars@piefed.social · ↑6 · 2 days ago

        People have been using “argument by authority” since before language was invented.

        Otoh, this article has to sell its clicks so… all-new terminology it is then.

    • mfed1122@discuss.tchncs.de · ↑2 · 2 days ago

      Yuck. This petty observation is unworthy of being called System 3. Stealing valor from Kahneman and Tversky. Keep their terminology out of your mouths, trend chasers.

    • Mothra@mander.xyz · ↑2 · 2 days ago

      Maybe… But I guess so does branding of many sorts. People rarely question the efficacy and/or safety (or the moral integrity of the manufacturing process) of a lot of products. Foods, cosmetics, and medicines would be the first categories that spring to mind that are regularly abused and misused by the population at large.

      So yes, my point being: perhaps religion has been doing this for centuries, but it’s not like there weren’t other cases.

  • BillyClark@piefed.social · ↑23 · 3 days ago

    If you’re willing to abandon your thinking to AI, I’m guessing you weren’t too attached to it in the first place.

    If we let people freely continue in this manner, we’re going to evolve into two separate species, one of which we might as well call the Eloi, completely unable to think or perform any tasks for themselves.

    • Brave Little Hitachi Wand@feddit.uk · ↑12 ↓1 · 3 days ago

      I get you, but it’s an important nuance that cognitive surrender is closely associated with external incentives and time pressure. If, say, you’re being paid to do a boring task and you don’t have enough time to do it, using AI is just the path of least resistance. I don’t condone it, but I can clearly see how it happens. It’s still fun to talk shit about idiots, though.

        • supersquirrel@sopuli.xyz (OP) · ↑2 · edited · 1 day ago

          That says more about how you are only open to seeing religious cults as cults and do not see other thought-terminating, ideologically charged movements as cults.

          A worship of capitalism as “natural” is a cult, fad diets are 10000% all cults, multi-level marketing schemes are cults. Cults are EVERYWHERE; identifying only the outwardly religious ones as cults is to see only the tip of the iceberg.

  • Berengaria_of_Navarre@lemmy.world · ↑9 · 3 days ago

    The whole point of AI is to train people out of critical thinking. It started with shitting all over / underfunding the arts, then turning schools into employee training camps, and now, to remove any last residue of free thought, AI fills the gaps.

  • okwhateverdude@lemmy.world · ↑5 · 3 days ago

    I dunno, I find this entirely unsurprising. And I bet it correlates strongly with political identity too: authoritarians love gullible idiots who vote for them. Humanity is fucking stupid in aggregate.

  • danh2os@piefed.social · ↑3 · 2 days ago

    “those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses” Yep.

  • Asetru@feddit.org · ↑7 ↓3 · 3 days ago

    When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

    This is the first paragraph of the article and I’m already up in arms about the writing… Painting it as a “two sides” situation, with people who like AI on one side and people who like AI differently on the other, is just too off-putting. Did an AI write this?

    • Mothra@mander.xyz · ↑3 · 3 days ago

      Well, it says “generally” and “broad categories”, which leaves room for other cases not accounted for, apparently because they are a minority. One category of people doesn’t question it; the other remains open to the possibility of AI making mistakes.

      I’m curious: in your opinion, what are the other big groups of people that the article failed to mention?

            • Mothra@mander.xyz · ↑2 · 2 days ago

              Frequency of use doesn’t interfere with what they are trying to measure: whether users consider the possibility of inaccurate answers, or whether they don’t.

              If frequency of use is taken into account and they are only considering users who regularly use AI, then people who try to avoid using AI aren’t part of the data pool. Those people belong to the minority we established as irrelevant to the study.

              If, however, they are still surveying people who rarely use AI as well as frequent users, those people can still belong to either of the two categories they are studying: those who generally consider the possibility of receiving inaccurate answers, and those who don’t.

              Previously you said there are more groups of people, which would prove the dichotomy false, but I fail to see it that way.

              • Asetru@feddit.org · ↑1 · edited · 2 days ago

                For the study that’s fine. I never argued the study should have more groups.

                It’s the article that should be more precise. An article that opens with “there are these two groups”, when the study simply studied those two groups and never claimed there weren’t more, is wrong. So that’s bad writing.

    • Skua@kbin.earth · ↑2 · 3 days ago

      Is that not just the article reflecting the study it’s talking about? It has users either accept or override the chatbot’s answer.

      • Asetru@feddit.org · ↑1 · 3 days ago

        Which is fair for a study, but it doesn’t mean that there are only those two groups in society, which is what the article suggests.