• merc@sh.itjust.works · 4 days ago

      They’re not even “stupid” though. It’s more like if you somehow trained a parrot with every book ever written and every web page ever created and then had it riff on things.

      But even then, a parrot is a thinking being. It may not understand the words it’s using, but it understands emotion to some extent, and it understands “conversation” to some extent: taking turns talking, and so on. An LLM just predicts, statistically, which word should appear next (see the sketch below).

      An LLM is nothing more than an incredibly sophisticated computer model designed to generate words in a way that fools humans into thinking those words have meaning. It’s almost more like a lantern fish than a parrot.
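
      To make “predicts the word that should appear next statistically” concrete, here is a toy sketch in Python: a bigram model that counts which word follows which, then generates by sampling from those counts. The corpus is made up for illustration; a real LLM conditions on the whole preceding context with a neural network over billions of parameters rather than a lookup table, but the generation loop has the same shape.

      ```python
      # Toy "statistical next-word prediction": count which word follows which,
      # then generate text by sampling from those counts. Illustrative only.
      import random
      from collections import Counter, defaultdict

      corpus = "the parrot repeats the words the parrot heard".split()

      # Bigram counts: how often does each word follow each other word?
      counts = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          counts[prev][nxt] += 1

      # Generate one word at a time; each choice depends only on what came before.
      word = "the"
      output = [word]
      for _ in range(6):
          options = counts[word]
          if not options:  # no observed continuation; stop
              break
          word = random.choices(list(options), weights=list(options.values()))[0]
          output.append(word)
      print(" ".join(output))
      ```

      Run it and it prints a short babble such as “the parrot repeats the words the parrot heard”: no memory, no goals, no model of the world, just “given what came before, which word tends to come next?”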

      • morrowind@lemmy.ml · 3 days ago

        And how do you think it predicts that? All that complex math can be clustered into higher-level structures. One could almost call it… thinking.

        Besides, we have reasoning models now, so they can emulate thinking, if nothing else.

          • merc@sh.itjust.works · 3 days ago

          One could almost call it… thinking

          No, one couldn’t, unless one were trying to sell snake oil.

          so they can emulate thinking

          No, they can emulate generating text that looks like text typed up by someone who was thinking.

            • merc@sh.itjust.works · 3 days ago

              Yes, thinking involves signals firing in your brain. But not just any signals: fire the wrong signals and someone’s having a seizure, not thinking.

              Just because LLMs generate words doesn’t mean they’re thinking. Thinking involves reasoning about and considering something. It involves processing information, storing memories, and bringing them up later as appropriate. We know LLMs aren’t doing that, because we know exactly what they are doing: generating the next word based on the previous words.