LLMs are the most well-read morons on the planet.
They’re not even “stupid” though. It’s more like if you somehow trained a parrot with every book ever written and every web page ever created and then had it riff on things.
But even then, a parrot is a thinking being. It may not understand the words it’s using, but it understands emotion to some extent, and it understands “conversation” to a certain extent: taking turns talking, and so on. An LLM just statistically predicts which word should appear next.
An LLM is nothing more than an incredibly sophisticated computer model designed to generate words in a way that fools humans into thinking those words have meaning. It’s almost more like a lantern fish than a parrot.
And how do you think it predicts that? All that complex math can be clustered into higher-level structures. One could almost call it… thinking.
Besides, we have reasoning models now, so they can emulate thinking if nothing else.
No, one couldn’t, unless one was trying to sell snake oil.
No, they can emulate generating text that looks like text typed up by someone who was thinking.
What do you define as thinking if not a bunch of signals firing in your brain?
Yes, thinking involves signals firing in your brain. But not just any signals: fire the wrong signals and someone’s having a seizure, not thinking.
Just because LLMs generate words doesn’t mean they’re thinking. Thinking involves reasoning and considering something. It involves processing information, storing memories, then bringing them up later as appropriate. We know LLMs aren’t doing that because we know what they are doing, and what they’re doing is simply generating the next word based on previous words.
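For what it’s worth, that “next word based on previous words” loop is easy to see in code. Here’s a minimal sketch using the Hugging Face transformers library, with GPT-2 as a stand-in model (chosen only because it’s small and public; the model choice and plain sampling here are my assumptions for illustration, not a claim about how any particular product is served):

```python
# Minimal sketch of autoregressive next-token prediction (assumed example:
# GPT-2 via Hugging Face transformers). At each step the model outputs a
# probability distribution over its vocabulary; we pick one token, append
# it, and repeat. That selection loop is the whole generation mechanism.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The parrot said", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                                     # generate 20 more tokens
        logits = model(input_ids).logits[:, -1, :]          # scores for the next token only
        probs = torch.softmax(logits, dim=-1)               # scores -> probabilities
        next_id = torch.multinomial(probs, num_samples=1)   # sample one token
        input_ids = torch.cat([input_ids, next_id], dim=-1) # append and go again

print(tokenizer.decode(input_ids[0]))
```

Whether that loop amounts to “thinking” is exactly what’s being argued above, but mechanically it really is this: score every possible next token, pick one, repeat.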