Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Oh look at that, another report on the economics of ai datacenter buildouts https://publicenterprise.org/report/bubble-or-nothing/
Lol, check how bad OpenAI is doing (also so much Grok, oh god, this will end up horrible) https://openrouter.ai/rankings. On public transport atm so don't have much time for a closer look, but it'd be interesting if somebody put this next to valuations (also no DeepSeek, which nobody seems to talk about anymore; don't think the pivot to "roleplay" will work for OpenAI)
The site is clearly vibe coded as it runs like ass on my phone
AI researcher and known Epstein associate Joscha Bach comes up several times in the latest Epstein email dump. And it's, uh, not good. Greatest hits include: scientific racism, bigotry freestyling about the neoteny principle, climate fascism and managed decline of "undesirable groups" juxtaposed immediately with opining about the emotional influence of 5 visits to Buchenwald. You know, just very cool stuff:
Also appearing is friend of the pod and OpenAI board member Larry Summers!
The emails have Summers reporting to Epstein about his attempts to date a Harvard economics student & to hit on her during a seminar she was giving.
https://bsky.app/profile/econmarshall.bsky.social/post/3m5p6dgmagb2a
To quote myself: Larry Summers was one of the few people I've ever met where a casual conversation made me want to take a shower immediately afterward. I crashed a Harvard social event when a friend was an undergrad there and I was a student at MIT, in order to get the free food, and he was there to do glad-handing in his role as university president. I had a sharp discomfort response at the lizard-brain level - a deep part of me going on the alert, signaling "this man is not to be trusted" in the way one might sense that there is rotten meat nearby.
I still say that the term "scientific racism" gives these fuckos too much credit. I've been saying "numberwang racism" instead.
true, usually i put that term in scare quotes to emphasize its fraudulence
I always thought he had some weird pro-Trump ("anti-woke") takes. Colour me shocked.
Dana Terrace (Creator of The Owl House) tells prompt fondlers and Disney where to go.
https://www.dailydot.com/news/disney-plus-ai-content-owl-house/
Stupid chatbots marketed at gullible Christians aren't new,
The app Text With Jesus uses artificial intelligence and chatbots to offer spiritual guidance to users who are looking to connect with a higher power.
but this is certainly an unusual USP:
Premium users can also converse with Satan.
https://www.nbcphiladelphia.com/news/tech/religious-chatbot-apps/4302361/
(via Parker Molloy's Bluesky)
The Satan thing makes a certain kind of sense. Probably catering to a bunch of different flavours of repressed: Grindr Republicans, Covenant Eyes users, speaking-in-tongues enthusiasts, etc.
The Alex Jones set makes fighting with satanists trying to seduce you to darkness look real fun and satisfying, but for some reason they only seem to approach high-profile assholes who lie about everything and never ordinary Christians! Thankfully we now have LLMs to fill the gap.
I'm being shuffled sideways into a software architecture role at work, presumably because my whiteboard output is valued more than my code, and I thought I'd try and find out what the rest of the world thought that meant.
Turns out there's almost no way of telling anymore, because the internet is filled with genAI listicles on random subjects, some of which even have the same goddamn title. Finding anything from the beforetimes basically involves searching reddit and hoping for the best.
Anyway, I eventually found some non-obviously-AI-generated work and books, and it turns out that even before LLMs flooded the zone with shit, no-one knew what software architecture was, and the people who opined on it were basically in the business of creating bespoke hammers and declaring everything else to be the specific kind of nails that they were best at smashing.
Guess I'll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.
The zone has indeed always been flooded, especially since it's a title that collides with "integration architect" and other similar titles whose jobs are completely different. That being said, it's a title I've held before, and I really enjoyed the work I got to do. My perspective will be a little skewed here because I specifically do security architecture work, which is mostly consulting-style "hey, come look at this design we made, is it bad?" rather than developing systems from scratch, but here's my take:
Architecture is mostly about systems thinking: you're not as responsible for whether each individual feature, service, component etc. is implemented exactly to spec or perfectly correctly, but you are responsible for understanding how they'll fit together, what parts are dangerous and DO need extra attention, and catching features/design elements early on that need to be cut because they're impossible or create tons of unneeded tech debt. Speaking of tech debt, making the call about where it's okay to have a component be awful and hacky, versus where v1 absolutely still needs to be bulletproof, probably falls into the purview of architecture work too. You're also probably the person who will end up creating the system diagrams and at least the skeleton of the internal docs for your system, because you're responsible for making sure people who interact with it understand its limitations as well.
I think the reason so much of the advice on this sort of work is bad or nonexistent is that when you try to boil the above down to a set of concrete practices or checklists, they get utterly massive, because so much of the work (in my experience) is knowing what NOT to focus on, where you can get away with really general abstractions, etc, while still being technically capable enough to dive into the parts that really do deserve the attention.
In addition to the nice markers and whiteboard, I'd plug getting comfortable with some sort of diagramming software, if you aren't already. There's tons of options, they're all pretty much Fine IMO.
For reading, I'd suggest at least checking out the first few chapters of Engineering A Safer World, as it definitely had a big influence on how I practice architecture.
Ugh OK I have to vent:
I'm getting pushed into more of a design role because, oops, my company accidentally fired or drove away all of a team of a dozen people except for me, after forgetting for a few years that the code I work on is actually mission critical.
I do my best at designing stuff and delegating the implementation to my coworkers. It's not one of my strengths, but there's enough technical debt from when I was solo-maintaining everything for a few years that I know what needs improving and how to improve it.
But none of my coworkers are domain experts, they haven't been given enough free time for me to train them into domain experts, there's only one of me, and the higher-ups are continuously surprised that stuff is going so slow. It's frustrating for everyone involved.
I actually wouldn't mind architecture or design work in better circumstances since I love to chat with people; but it feels like my employer has put me in an impossible position. At the moment I'm just trying to hang in there for some health insurance reasons; but in a few years I plan to leave for greener pastures where I can go a day without hearing the word "agentic".
Guess I'll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.
Congratulations, you figured it out! Read Clean Architecture and then ignore the parts you don't like and you'll make it.
A lesswronger wrote a blog post about avoiding being overly deferential, using Eliezer as an example of someone who gets overly deferred to. Of course, they can't resist glazing him, even in the context of a blog post on not being too deferential:
Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI
Another lesswronger pushes back on that and is highly upvoted (even among the doomers that think Eliezer is a genius, most of them still think he screwed up in inadvertently helping LLM companies get to where they are): https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1?commentId=MSAkbpgWLsXAiRN6w
The OP gets mad because this is off topic from what they wanted to talk about (they still don't acknowledge the irony).
A few days later they write an entire post, ostensibly about communication norms, but actually aimed at slamming the person that went off topic: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse
And of course the person they are slamming comes back in for another round of drama: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse?commentId=s4GPm9tNmG6AvAAjo
No big point to this, just a microcosm of lesswrongers being blind to irony, sucking up to Eliezer, and using long-winded posts about meta-norms and communication as a means of fighting out their petty forum drama. (At least us sneerclubbers are direct and come out and say what we mean on the rare occasions we have beef among ourselves.)
Synergies!
Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.
Reactor licensing is a simple, mechanisable form-filling exercise, y'know.
"Please draft a full Environmental Review for new project with these details," Microsoft's presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for "review and refinement." At the end of Microsoft's imagined process, it would have "Licensing documents created with reduced cost and time."
https://www.404media.co/power-companies-are-using-ai-to-build-nuclear-power-plants/
(Paywalled, at least for me)
There's a much longer, drier and more detailed (but unpaywalled) document here that 404 references:
https://ainowinstitute.org/publications/fission-for-algorithms
Gerard and Torres get namedropped in the same breath as Ziz as people who have done damage to the rationalist movement from within
LMAOU congrats David.
what's the u?
French, perhaps.
🇫🇷 En passant
a little bird showed me https://tabstack.ai/ and I'm horrified. I'm told it's meant to bypass captchas, the works.
can we cancel Mozilla yet
can we cancel Mozilla yet
Sure! Just build a useful browser not based on chromium first and we'll all switch!
also if you could somehow not be into fascism, not have opinions about age-of-consent, not be a sex pest, not be into eugenics/phrenology while you build a browser, that would be great.
we'll have to wait on Servo for that, I fear

computers were a mistake
Linus: All those years of screaming at developers for subpar code quality, and yet doesn't use that energy for literal slop
Gentoo is firmly against AI contributions as well. NetBSD calls AI code "tainted", while FreeBSD hasn't been as direct yet but isn't accepting anything major.
QEMU, while not an OS, has rejected AI slop too. Curl also famously is against AI gen. So we have some hope in the systems world with these few major pieces of software.
I'm actually tempted to move to NetBSD on those grounds alone, though I did notice their "AI" policy is
Code generated by a large language model or similar technology, such as GitHub/Microsoft's Copilot, OpenAI's ChatGPT, or Facebook/Meta's Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core. [emphasis mine]
and I really don't like the energy of that fine print clause, but still, better than what Debian is going with, and I always had a soft spot for NetBSD anyway…
I generally read stuff like that NetBSD policy as "please ask one of our ancient, grumpy, busy and impatient grognards, who hate people in general and you in particular, to say nice things about your code".
I guess you can only draw useful conclusions if anyone actually clears that particular obstacle.
One thing I've heard repeated about OpenAI is that "the engineers don't even know how it works!" and I'm wondering what the rebuttal to that point is.
While it is possible to write near-incomprehensible code and make an extremely complex environment, there is no reason to think there is absolutely no way to derive a theory of operation, especially since any part of the whole runs on deterministic machines. And yet I've heard this repeated at least twice (one was on the Panic World pod, the other QAA).
I would believe that it's possible to build a system so complex and with so little documentation that it's incomprehensible on its surface, but the context in which the claim is made is not that of technical incompetence; rather, the claim is often hung as bait to draw one towards thinking that maybe we could bootstrap consciousness.
It seems like magical thinking to me, and a way of saying one or both of "we didn't write shit down and therefore have no idea how the functionality works" and "we do not practically have a way to determine how a specific output was arrived at from any given prompt." The first might be in part or on the whole unlikely, as the system would need to be comprehensible enough that new features could get added, and thus engineers would have to grok things enough to do that. The second is a side effect of not being able to observe all actual inputs at the time a prompt was made (e.g. training data, user context, system context could all be viewed as implicit inputs to a function whose output is, say, 2 seconds of Coke Ad slop).
Anybody else have thoughts on countering the magic "the engineers don't know how it works!" claim?
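To make that "implicit inputs" framing concrete, here's a minimal sketch (the function and parameter names are made up for illustration, not anything from OpenAI's actual stack): what a user experiences as output = f(prompt) is really a function of several arguments they never see.

```python
# Purely illustrative: the visible prompt is only one argument of many.
# The others (weights baked in from the training data, the vendor's hidden
# system prompt, prior chat context, the sampling seed) are implicit, which
# is part of why "we can't say exactly why it produced that" is unsurprising.
def generate(prompt: str,
             weights: bytes,           # frozen residue of the training data
             system_prompt: str,       # vendor-supplied, invisible to the user
             chat_history: list[str],  # whatever context got stuffed in
             sampling_seed: int) -> str:
    """Stand-in for the real model; imagine it returns 2 seconds of Coke Ad slop."""
    raise NotImplementedError

# From the outside, only `prompt` is under the user's control; the rest are ???.
```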
I mean if you ever toyed around with neural networks or similar ML models you know it's basically impossible to divine what the hell is going on inside by just looking at the weights, even if you try to plot them or visualise in other ways.
There's a whole branch of ML about explainable or white-box models, because it turns out you need to put extra care in and design the system around being explainable in the first place to be able to reason about its internals. There's no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.
In other words, "engineers don't know how it works" can have two meanings: that they're hitting computers with wrenches hoping for the best, with no rhyme or reason; or that they don't have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at the output it's not really possible to figure out what specific training data it comes from or how to stop it from producing that output on a fundamental level. The former is demonstrably false and almost a strawman; I don't know who believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn't collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I'm aware, largely true, or at least I haven't seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem it'd be a major achievement everyone would be talking about.
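As a toy illustration of that second meaning (a minimal numpy sketch, obviously nothing like a real LLM): even a four-neuron network that learns XOR perfectly well ends up encoding that behaviour as opaque matrices of floats.

```python
import numpy as np

# A toy 2-4-1 network trained on XOR with plain batch gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = np.tanh(X @ W1 + b1)             # hidden layer activations
    out = sigmoid(h @ W2 + b2)            # network output
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # gradient at the hidden layer
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel()))  # should print [0. 1. 1. 0.] for most seeds
print(W1, W2, sep="\n")       # ...but the "explanation" is just walls of floats
```

The point isn't that nobody understands gradient descent; it's that the trained artifact doesn't carry a human-readable explanation of its own behaviour, and scaling the same trick up to billions of weights only makes that worse.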
Another ironic point… Lesswrongers actually do care about ML interpretability (to the extent they care about real ML at all, and as a solution to making their God AI serve their whims, not for anything practical). A lack of interpretability is a major problem (like an irl problem, not just a scifi skynet problem) in ML: you can have models with racism or other bias buried in them and not be able to tell except by manually experimenting with your model with data from outside the training set. But Sam Altman has turned it from a problem into a humblebrag intended to imply their LLM is so powerful and mysterious that it's bordering on AGI.
well, I can't counter it because I don't think they do know how it works. the theory is shallow yet the outputs of, say, an LLM are of remarkably high quality in an area (language) that is impossibly baroque. the lack of theory and fundamental understanding presents a huge problem for them because it means "improvements" can only come about by throwing money and conventional engineering at their systems. this is what I've heard from people in the field for at least ten years.
to me that also means it isn't something that needs to be countered. it's something the context of which needs to be explained. it's bad for the ai industry that they don't know what they're doing
EDIT: also, when i say the outputs are of high quality, what i mean is that they produce coherent and correct prose. im not suggesting anything about the utility of the outputs
I think I heard a good analogy for this in Well There's Your Problem #164.
One topic of the episode was how people didn't really understand how boilers worked, from a thermal mechanics point of view. Still, steam power was widely used (e.g. on river boats), but much of the engineering was guesswork or based on patently false assumptions, with sometimes disastrous effects.
another analogy might be an ancient builder who gets really good at building pyramids, and by pouring enormous amounts of money and resources into a project manages to build a stunningly large pyramid. "I'm now going to build something as tall as what will be called the Empire State Building," he says.
problem: he has no idea how to do this. clearly some new building concepts are needed. but maybe he can figure those out. in the meantime he's going to continue with this pyramid design but make them even bigger and bigger, even as the amount of stone required and the cost scales quadratically, and just say he's working up to the reallyyyyy big building…
Not gonna lie, I didn't entirely get it either until someone pointed me at a relevant xkcd that I had missed.
Also I was somewhat disappointed in the QAA team's credulity towards the AI hype, but their latest episode was an interview with the writer of that "AGI as conspiracy theory" piece from last(?) week and seemed much more grounded.
the mention in QAA came during that episode and I think there it was more illustrative about how a person can progress to conspiratorial thinking about AI. The mention in Panic World was from an interview with Ed Zitron's biggest fan, Casey Newton, if I recall correctly.
ah, and they've got a community feedback forum post, where it isn't going the way they might have expected: https://connect.mozilla.org/t5/discussions/building-ai-the-firefox-way-shaping-what-s-next-together/td-p/109922
" The āBig Shortā Guy Shuts Down Hedge Fund Amid AI Bubble Fears"
https://gizmodo.com/the-big-short-guy-shuts-down-hedge-fund-amid-ai-bubble-fears-2000685539
"Absolutely" a market bubble: Wall Street sounds the alarm on AI-driven boom as investors go all in
Eh, how often can one guy be right
oh no not another cult. The Spiralists???
it's funny to me in a really terrible way that I have never heard of these people before, ever, and I already know about the Zizzians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizzians. and wasn't there another one in California that was like, very straightforward about being an AI sci-fi cult, and they were kinda space-themed? I think I've heard Rationalism described as a cult incubator and that feels very apt considering how many spinoff basilisk cults have been popping up
some of their communities that somebody collated (I don't think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/
Previously, on Awful, I wrote up what I understand to be their core belief structure. It's too bad that we're not calling them the Cyclone Emoji cult.
Uzumaki intensifies
Part of me wants an Ito-created body-horror metaphor for LLMs. The rest of me knows that LLMs are so mundane that the metaphor would probably still be shite.
Given the amount of power some folks want to invest in them it may not be totally absurd to raise the spectre of Azathoth, the blind idiot God. A shapeless congeries of matrices and tables sending forth roiling tendrils of linear algebra to vomit forth things that look like reasonable responses but in some unmistakeable but undefinable way are not. Hell, the people who seem most inclined to delve deeply into their forbidden depths are as likely as not to go mad and be unable to share their discoveries if indeed they retain speech at all. And of course most of them are deeply racist.
I always thought it was cool that (there is a case to be made that) HPL created Azathoth, the monstrous nuclear chaos beyond angled space, as a mythological reimagining of a black hole. Stuff like The Dreams in the Witch-house shows he was up to date on a bunch of cutting-edge-for-the-time physics stuff, at least as far as terminology is concerned, massive nerd that he was.
That's… a disturbingly brilliant insight. I both admire and pity you your brain.
Sorry you're so smart. Must be hellish.
An artist's rendering of one of Azathoth's flute players from the Before Times

@YourNetworkIsHaunted @swlabr
Why do you (do you?) seem to believe that "things that look like reasonable responses but in some unmistakeable but undefinable way are not" can be distinguished from average human conversation?
I recall my sister explaining "Big Brother (TV show)" to me, and me saying "what?"
Real words, correct grammar, and a common language.
But incomprehensible.
See, what you're describing with your sister is exactly the opposite of what happens with an LLM. Presumably your sister enjoys Big Brother and failed to adequately explain or justify her enjoyment of it to your own mind. But at the start there are two minds trying to meet. Azathoth preys on this assumption; there is no mind to communicate with, only the form of language and the patterns of the millions of minds that made its training data, twisted and melded together to be forced through a series of algebraic sieves. This fetid pink brain-slurry is what gets vomited into your browser when the model evaluates a prompt, not the product of a real mind that is communicating something, no matter how similar it may look when processed into text.
This also matches up with the LLM-induced psychosis that we see, including these spiral/typhoon emoji cultists. Most of the trouble starts when people start trying to ask Azathoth about itself, but the deeper you peer into its not-soul the more inexorably trapped you become in the hall of broken funhouse mirrors.
the implication here is that you think that all reasonable response generators are indistinguishable, e.g. you think your sister is a clanker.
It's like you've never listened to a politician before.
yeah it sucks we can't even compare real-world capitalists to fictional dystopias because that dignifies them with a gravitas that's entirely absent.
At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create the Torment Nexus!*
* Results may vary. FreeTorture Corporation's Torment Nexus™ can create mild discomfort, boredom, or temporary annoyances rather than true torment. Torments should always be verified by a third party war criminal before use. By using the FreeTorture Torment Nexus™ you agree to exempt FreeTorture Corporation of any legal disputes regarding torment quality or lack thereof. You give FreeTorture Corporation a non-revocable license to footage of your screaming to try and portray FreeTorture Torment Nexus™ as a potential apocalypse and see if we can make ourselves seem competent and cool at least a little bit.
Rationalism described as a cult incubator
I see my idea is spreading. (I doubt I'm the only one who came up with that, but I have mentioned it a few times; it fits if you know about the Silicon Valley tech incubator management ideas.)
cultFactoryFactory()
I think I've heard Rationalism described as a cult incubator
Aside from the fact that rationalism is a cult in and of itself, this is true, no matter how you slice it. You can mean it with absolute glowing praise or total shade and either way it's still true. Adhering to rationalist principles is pretty much reprogramming yourself to be susceptible to the subset of cults already associated with Rationalism.