• mindbleach@sh.itjust.works
    2 months ago

    “Said?”

    You can’t ask these things what they did, or why they did it, and expect a straight answer. That’s not how they work.

    • Leon@pawb.social
      2 months ago

      I don’t think people realise that they’re basically fancy dice, turning noise into words based on probability. You could theoretically do the same with dice rolls and one hell of an over-complicated word lookup chart.

      You can’t ask an LLM about its intentions and history for the same reasons you can’t ask a pair of dice about their intentions or their history.
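      The "dice plus lookup chart" idea can literally be sketched in a few lines. This is a toy illustration, not how any real LLM is implemented: the word table and probabilities here are made up, standing in for a model's output distribution over its vocabulary.

      ```python
      import random

      # Hypothetical next-word probabilities for some context -- a toy
      # stand-in for an LLM's output distribution over its vocabulary.
      next_word_probs = {"are": 0.5, "roll": 0.3, "explode": 0.2}

      def roll_for_word(probs, rng):
          """Pick the next word by 'rolling' a uniform random number
          against the cumulative probability chart."""
          roll = rng.random()
          cumulative = 0.0
          for word, p in probs.items():
              cumulative += p
              if roll < cumulative:
                  return word
          return word  # guard against floating-point rounding at the top end

      rng = random.Random(0)
      print(roll_for_word(next_word_probs, rng))
      ```

      Nothing in that loop has intentions or a memory of past rolls, which is the point: asking it "why did you pick that word?" has no answer beyond "the die landed there."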

      • mindbleach@sh.itjust.works
        2 months ago

        There’s enough going on inside to raise questions about what we consider thinking - but it’s crystal clear this shape of model is not self-aware. You can swap some names and it’ll hold your side of the argument without missing a beat.

        From the other direction, we have to acknowledge that people can also make up reasons for something they did on autopilot. Yes, sometimes you are describing a conscious process. Other times you did a thing, someone asks what the fuck, and your brain constructs a plausible motive after the fact.

        Dumb as these models are, we shouldn’t oversimplify. They’re smart enough that we can call them stupid. They have a measurable IQ. If that’s possible with just dice rolls, what does that say about meatbags like us?