Summaries and shortcuts can provide surface-level knowledge, but the true benefits of reading—expanded perspective, personal growth, and the joy of discovery—are only realized through immersive, attentive reading. In a world that values “time efficiency” above all else, the richness and depth of art are flattened, and the very qualities that make us human—our capacity for reflection, connection, and wonder—are diminished.
OP, LLMs don’t “know” shit. When one says something that conforms to a preexisting bias of yours, that means nothing, and it should have no bearing on the strength of your argument. An LLM isn’t a knowledge base; it’s a transformer model whose whole job is to predict the continuation you’re most likely to want to hear, given what came before.
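To make that concrete, here’s a minimal sketch of what a model like this does at each step (assuming the Hugging Face `transformers` library and the public GPT-2 weights; the prompt is just an illustration): it converts the preceding text into a probability distribution over the next token. That’s the entire mechanism, flattering-your-priors included.

```python
# Minimal sketch: an LLM is a next-token predictor, not a knowledge base.
# Assumes the Hugging Face `transformers` library and public GPT-2 weights;
# the prompt below is purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "LLMs are"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# The model's entire output is a probability distribution over the next
# token, conditioned on what came before -- nothing more.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok_id))!r:>12}  p={p:.3f}")
```

Whether the continuation happens to agree with you is a fact about the training data and your prompt, not evidence of anything.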
The people in the anti-AI crowd who denounce rampant, uncritical use of LLMs but still shit their pants and clap every time an LLM says something against LLMs clearly don’t have even a bare-minimum understanding of machine learning, or of cognitive biases like confirmation bias.
(Your link results in an internal runtime error btw.)
Perplexity does those weird runtime errors all the time. Just hit refresh. It eventually wakes up.
OP, LLMs don’t “know” shit.
You’ll find me making this exact point, incidentally, right here in this forum. I’m well aware that LLMbeciles know literally nothing. And that the “reasoning” models don’t do anything that even slightly resembles reasoning.
Even AIs know this is bullshit.