Here's the main problem:
LLMs don’t forget things.
They do not disregard false data or false concepts.
What happens when that conversation, that dataset, that knowledge base gets too big?
Well, the LLM now gets slower and less efficient, having to compare and contrast more and more contradictory data to build its heuristics from.
It has no ability to meta-cognate. It has no ability to discern and disregard bullshit, both as raw data points and as bullshit processes for evaluating and formulating concepts and systems.
The problem is not that they know too little, but that they know so much that isn't so: pointless, contradictory garbage.
When people learn and grow and change and make breakthroughs, they do so by shifting to or inventing some kind of totally new mental framework for understanding themselves and/or the world.
LLMs cannot do this.
You are right, and I have seen some people try some clumsy solutions:
Have the LLM summarize the chat context (this loses information, but can make the LLM appear to have a longer memory)
Have the LLM repeat and update a todo list at the end of every prompt (this keeps it on task, since it always has the last response in memory, BUT it can try to do 10 things, fail on step 1, and not realize it)
Have an LLM trained on really high-quality data, then have it judge the randomness of the internet. This is meta-cognition by humans using the LLM as a tool for itself. It definitely can't do it by itself without becoming schizophrenic, but it can produce some smart models from inconsistent, crappy/dirty datasets.
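To make the first trick concrete, here's a minimal sketch of the rolling-summary pattern: keep a short window of recent turns, and fold anything that falls out of the window into a running summary. The `summarize` callable is a stand-in for a real LLM call (so the sketch stays runnable without an API); everything here is illustrative, not any particular product's implementation.

```python
from collections import deque
from typing import Callable


class RollingContext:
    """Bounded chat context: a running summary plus the last few turns.

    `summarize` stands in for an LLM compression call; it is caller-supplied
    so this sketch runs without any external API.
    """

    def __init__(self, summarize: Callable[[str], str], keep_recent: int = 4):
        self.summarize = summarize
        self.summary = ""
        self.recent: deque = deque(maxlen=keep_recent)

    def add_turn(self, turn: str) -> None:
        if len(self.recent) == self.recent.maxlen:
            # Oldest turn is about to fall out of the window:
            # fold it into the summary (this is where information is lost).
            evicted = self.recent[0]
            self.summary = self.summarize(self.summary + "\n" + evicted)
        self.recent.append(turn)

    def prompt_context(self) -> str:
        # What would actually be sent to the model each turn.
        parts = [f"Summary so far: {self.summary}"] if self.summary else []
        parts.extend(self.recent)
        return "\n".join(parts)
```

In real use, `summarize` would be another LLM call ("compress this text"), which is exactly why the trick only *appears* to extend memory: every fold is lossy.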
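The failure mode in the second trick (the model checks a step off without noticing it failed) is easiest to see if you separate the model's self-report from an external check. This is a hypothetical sketch: `attempt` and `verify` are made-up caller-supplied callables, not a real agent framework's API.

```python
from typing import Callable, Dict, List


def run_todo_loop(
    steps: List[str],
    attempt: Callable[[str], bool],  # hypothetical: the model's own claim of success
    verify: Callable[[str], bool],   # hypothetical: an external check of real success
) -> Dict[str, str]:
    """Walk a todo list, comparing what the model claims against what
    actually happened, so a failure on step 1 is caught instead of
    being silently marked complete."""
    status: Dict[str, str] = {}
    for step in steps:
        claimed_done = attempt(step)
        actually_done = verify(step)
        if claimed_done and not actually_done:
            # The gap described above: the model thinks it's done, but isn't.
            status[step] = "claimed done, actually failed"
        else:
            status[step] = "done" if actually_done else "failed"
    return status
```

Without the external `verify`, the loop degenerates into trusting `attempt` alone, which is precisely the "tries 10 things, fails on step 1, doesn't realize it" behavior.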
Again, you are right, and I hate using the sycophantic clockwork-orange LLMs with no self-awareness. I have some hope that they will get better.