Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)


@fiat_lux @sansruse What's to keep the infernal code from ignoring that prompt?
The problem is less that the system would somehow ignore that part of the prompt and more that "hallucinate" or "make stuff up" aren't special subroutines that get called on demand when prompted by an idiot; they're descriptive of what an LLM does all the time. It's following statistical patterns in a matrix created by the training data and reinforcement processes. Theoretically, if the people responsible for that training and reinforcement did their jobs well, those patterns should only include true statements, but if it were that easy then you wouldn't have [insert the entire intellectual history of the human species].
Even if you assume that the AI boosters are completely right and that the LLM inference process is directly analogous to how people think, does saying "don't fuck up" actually make people less likely to fuck up? Like, the kind of errors you're looking at here aren't generated by some separate process. Someone who misremembers a fact doesn't know they've misremembered until they get called out on the error, either by someone else with a better memory or by reality imposing the consequences of being wrong. Similarly, the LLM isn't doing anything special when it spits out bullshit.
I'm chiming in to agree with Architeuthis and to add a citation that explains more. LLMs have a hard minimum rate of hallucinations based on the rate of "monofacts" in their training data (https://arxiv.org/html/2502.08666v1). Basically, facts that appear independently and only once in the training data cause the LLM to "learn" that some fraction of its output should be disconnected "facts" that appear nowhere else, and so it generates output in that style, which in practice is essentially random and thus almost guaranteed to be false.
And as Architeuthis says, the ability of LLMs to "generalize" basically means they compose true information together in ways that are sometimes false. So to the extent you want your LLM to ever "generalize", you also get an unavoidable minimum of hallucinations that way.
So yeah, even given an even more absurdly big training data source that was also magically perfectly curated, you wouldn't be able to iron out the intrinsic flaws of LLMs.
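To make the monofact rate concrete, here's a toy sketch (my own oversimplification, not the paper's actual estimator, and I'm hand-waving whether the denominator is distinct facts or total occurrences): you count what fraction of the facts in the training set show up exactly once, and that fraction is roughly the floor the paper ties the hallucination rate to.

```python
from collections import Counter

# Imaginary "training set" of factual statements.
facts = [
    "water boils at 100 C",
    "water boils at 100 C",                            # repeated fact
    "the moon orbits the earth",
    "the moon orbits the earth",
    "John Smith was born in Seattle in 1982",          # appears once: a monofact
    "Jane Doe earned her PhD from Stanford in 2008",   # another monofact
]

counts = Counter(facts)
monofacts = [f for f, n in counts.items() if n == 1]
monofact_rate = len(monofacts) / len(facts)            # Good-Turing-ish flavor

print(f"{len(monofacts)} monofacts out of {len(facts)} fact occurrences "
      f"(rate = {monofact_rate:.2f})")
```

Whatever that number works out to for a real corpus, the point is you can't prompt your way below it.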
Thank you! Let me wildly oversimplify and make sure I understand.
The fundamental problem is that if you train on a set that includes multiple independent facts, the generative aspect of the model - the ability to generate new text that is statistically consistent with the training data - requires remixing and combining tokens in a way that will inevitably result in factual errors.
Like, if your training data includes "all men are mortal" and "all lions are cats" then in order to generate new text it has to be "loose" enough to output "all men are cats". Feedback and reinforcement can adjust the probabilities to a degree, but because the model is fundamentally about token probabilities and doesn't have any other way of accounting for whether a statement is actually true, there's no way to completely remove it. You can reinforce that "all cats are mortal" is a better answer, but you can't train it that "all men are cats" is invalid.
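You can see the failure mode in a deliberately dumb sketch: a bigram model trained on just those two true sentences (nothing like a real transformer, obviously, just the "token statistics with no notion of truth" part):

```python
import random
from collections import defaultdict

# Toy bigram "model" trained on two true statements.
corpus = ["all men are mortal", "all lions are cats"]

transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev].append(nxt)

def generate(start="all", length=4):
    out = [start]
    while len(out) < length:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

# Sampling a few times gives the two true sentences, plus
# "all men are cats" and "all lions are mortal": the model only
# knows which token follows which, not which statements are true.
for _ in range(5):
    print(generate())
```

A real model has vastly more context and training signal, so the false remixes get rarer and subtler, but the generation mechanism is the same kind of thing.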
You've described the problem with generalization, yes. Well, you could maybe sort of train it not to generate "all men are cats", but then that might also prevent it from making the more correct generalization "all cats are mortal", or even completely valid generalizations like combining "all men are mortal" and "Socrates is a man" to get "Socrates is mortal".
The problem with monofacts is a bit more subtle. Let's say the fact "John Smith was born in Seattle in 1982, earned his PhD from Stanford in 2008, and now leads AI research at Tech Corp" appears only once in the training data set. The model will have seen some of the component words many times and can generate tokens around them just fine: Seattle as a location in the US, Stanford as a college, 2008 as a date, etc. But the combination describing John Smith appears uniquely, which trains the model that facts can be unique combinations of these pieces. So the model might make up a fact like "Jane Doe was born in Omaha in 1984, earned her master's from Caltech in 2006, and is now CEO of Tech Corp" because it fits the pattern of a unique fact that was in its training data set.
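The effect (not the literal mechanism, the real thing is all distributional) is basically a mad libs sampler over slots it has seen filled before; something like this toy:

```python
import random

# Slot values harvested from fact-shaped sentences in an imaginary training set.
names   = ["John Smith", "Jane Doe"]
cities  = ["Seattle", "Omaha"]
born    = ["1982", "1984"]
degrees = ["PhD", "master's"]
schools = ["Stanford", "Caltech"]
grad    = ["2008", "2006"]
roles   = ["leads AI research at Tech Corp", "is now CEO of Tech Corp"]

# Every original fact was itself a unique combination, so a never-before-seen
# recombination still "fits the pattern", even though nobody ever said it.
print(f"{random.choice(names)} was born in {random.choice(cities)} in "
      f"{random.choice(born)}, earned a {random.choice(degrees)} from "
      f"{random.choice(schools)} in {random.choice(grad)}, and "
      f"{random.choice(roles)}.")
```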
Just wanted to say that what actually happens - "tal" comes after "mor" when "soc-rate-s" is in the near context and agrees with the attention mechanism - is a very different type of logic than what this phrasing implies. This is also in combination with the peculiarities of word embeddings (the technique by which the tokens are translated to numeric vectors), like how it has a hard time making something useful out of numbers; it uh gets uh complicated.
The monofacts thing seems very post hoc and way too abstracted in comparison, and also the amount of text that can be categorized as strictly true or false isn't that big, all things considered.
Still, if the point was to formalize the very no-duh observation that a neural net isn't supposed to output its dataset verbatim at all times, hence hallucinations, then fine, I guess. Their proposed sort-of solution (controlled miscalibration) even amounts to forcing the model to generalize less by memorizing more, which used to be the opposite of why you would choose to use this type of topology.
Yeah, it does seem to be running into the basic issue that what boosters want LLMs to be (an all-knowing oracle) is in sharp contrast to what LLMs actually are (a machine for churning out statistically plausible content).
That's really interesting. So the model can generalize the form of what a fact looks like based on these monofacts but ends up basically playing mad libs with the actual subjects. And if I understand the inverse correlation they were describing between hallucination rate and calibration, even their best mechanism to reduce this (which seems to have applied some kind of back-end doubling to the specific monofacts to make the details stand out as much as the structure, I think?) made the model less well-calibrated. Though I'm not entirely sure what "less well-calibrated" amounts to overall. I think they're saying it should be less effective at predicting the next token overall (more likely to output something nonsensical?) but also less prone to mad libs-style hallucinations.
That would only work if inference were some sort of massive if-then-else process. Hallucinations are downstream of neural networks' ability to generalize from the dataset examples; they aren't going anywhere even if you train on a corpus of perfectly correct statements.
@YourNetworkIsHaunted @StumpyTheMutt … Now I'm curious what a model does if the prompt contains "Do not think of pink elephants."
For the chain-of-thought instruction-following model gpt-oss-20b, I've noticed its reasoning content often includes it talking about stuff it is supposed to avoid in the final output and double-checking that it doesn't have that forbidden output. So it would waste tokens talking about pink elephants in its reasoning content, but then do okayish at avoiding pink elephants in its final output.
This would actually be an interesting question for the more rigorous end of the mechanistic interpretability people to study. They decompose the system to find "features" within different layers that are associated with different behaviors or concepts in the inputs and outputs, and that activate or deactivate each other. A famous example is the time they identified a linear combination of activations in a layer that corresponded to "the golden gate bridge"; when they reached in and kept its numbers high while the model ran, it would not stop talking about the bridge regardless of the topic, even while acknowledging that its answers were incorrect for the questions at hand.
I actually would love to see what mechanistically happens to that feature when you put in the input "do not talk about the golden gate bridge".
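The crude version of that kind of intervention is just a forward hook that adds the feature's direction into a layer's activations on every pass. Minimal sketch with made-up stand-ins (a single linear layer instead of a real model, a random unit vector instead of the actual "golden gate bridge" direction), just to show the shape of the trick:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

d_model = 16
layer = nn.Linear(d_model, d_model)             # stand-in for one transformer block
feature_dir = torch.randn(d_model)
feature_dir = feature_dir / feature_dir.norm()  # stand-in "feature" direction
alpha = 10.0                                    # how hard to clamp the feature on

def clamp_feature(module, inputs, output):
    # Forward hook: push the layer's output along the feature direction,
    # i.e. hold that feature's activation high regardless of the input.
    return output + alpha * feature_dir

handle = layer.register_forward_hook(clamp_feature)
x = torch.randn(2, d_model)
steered = layer(x)     # activations shoved toward the feature
handle.remove()
plain = layer(x)       # same input, no intervention

# The difference is exactly the injected push at every position.
print(torch.allclose(steered - plain, alpha * feature_dir.expand(2, d_model)))
```

What I'd want to compare is that feature's natural activation on "talk about the golden gate bridge" versus "do not talk about the golden gate bridge" prompts, with no clamping at all.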
@ysegrim @YourNetworkIsHaunted @StumpyTheMutt in my experience that makes it much more likely to generate stuff related to pink elephants.
@ysegrim @YourNetworkIsHaunted Do LLMs dream of electric slop?