Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)


A rival gang of "AI" "researchers" dares to make fun of Big Yud's latest book, and the LW crowd are Not Happy.
Link to the takedown: https://www.mechanize.work/blog/unfalsifiable-stories-of-doom/ (heartbreaking: the worst people you know made some good points)
LW announcement, check out the karma scores! https://www.lesswrong.com/posts/Bu3dhPxw6E8enRGMC/stephen-mcaleese-s-shortform?commentId=BkNBuHoLw5JXjftCP
Update: LessWrong attempts to debunk the piece with inline comments here:
https://www.lesswrong.com/posts/i6sBAT4SPCJnBPKPJ/mechanize-work-s-essay-on-unfalsifiable-doom
Leading to such hilarious howlers as:
Ummm, pretty sure engaging in a new world war and getting their country bombed to pieces was not on most Germans' agenda. A small group of ideologues managed to seize complete control of the state, and did their very best to prevent widespread knowledge of the Holocaust from getting out. At the same time they used the power of the state to ruthlessly suppress any opposition.
ohai begging the question
A few comments...
Yeah, Eliezer had a solid decade and a half to develop a presence in academic literature. Nick Bostrom at least sort of tried to formalize some of the arguments but didn't really succeed. I don't think they could have succeeded, given how speculative their stuff is, but if they had, review papers could have tried to consolidate them and then people could actually respond to the arguments fully. (We all know how Eliezer loves to complain about people not responding to his full set of arguments.)
Even given the extent to which Anthropic's "research" tends to be rigged scenarios acting as marketing hype, without peer review or academic levels of quality, at the very least they (usually) involve actual AI systems that actually exist. It is pretty absurd how thoroughly Eliezer has ignored everything about how LLMs actually work (or even how they hypothetically might work with major foundational developments) in favor of repeating the same scenario he came up with in the mid-2000s. Nor has he ever tried mathematical analyses of which classes of problems are computationally tractable to a smart enough entity and which remain computationally intractable (titotal has written some blog posts about this with respect to materials science; tldr, even if magic nanotech were possible, an AGI would need lots of experimentation and can't just figure it out with simulations. Or see the lesswrong post explaining how chaos theory and slight imperfections in measurement make a game of pinball unpredictable past a few ricochets.)
The lesswrong responses are stubborn as always.
Y'all aren't beating the theology allegations.
(Guy in hot dog suit) "We're all looking for the person who didn't do this!"
I clicked through too much and ended up finding this. Congrats to jdp for getting onto my radar, I suppose. Are LLMs bad for humans? Maybe. Are LLMs secretly creating a (mind-)virus without telling humans? That's a helluva question; you should share your drugs with me while we talk about it.