Want to wade into the spooky surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. Happy Halloween, everyone!)
It is Sunday, so time to make some posts almost nobody will see. I generated a thing:

Image description
3 screenshots from a Simpsons episode. Bart is sitting in his class; in the first panel the whole class, eyes filled with expectation and glee, says "Say the line". In the next panel a sad, downcast Bart says "AI is the future and we all need to get on board". In the third panel everybody but Bart cheers.
An article in which Business Insider tries to glaze Grookeypedia.
Meanwhile, the Grokipedia version felt much more thorough and organized into sections about its history, academics, facilities, admissions, and impact. This is one of those things where there is lots of solid information about it existing out there on the internet (more than has been added so far to the Wikipedia page by real humans) and an AI can crawl the web to find these sources and turn it into text. (Note: I did not fact-check Grokipedia's entry, and it's totally possible it got all sorts of stuff wrong!)
"I didn't verify any information in the article but it was longer so it must be better"
What I can see is a version where AI is able to flesh out certain types of articles and improve them with additional information from reliable sources. In my poking around, I found a few other cases like this: entries for small towns, which are often sparse on Wikipedia, are filled out more robustly on Grokipedia.
"I am 100% sure AI can gather information from reliable sources. No I will not verify this in any way. Wikipedia needs to listen to me"
felt much more thorough and organized
You know what people say about judging a book by its cover and all that? Of course a lot of people will fall for the "it looks good" trap. Which is one of the whole problems of genAI: it creates cargo-cult-styled texts.
E: also came across a nice skeet describing the problem: "To steal a Colbertism: these are truthiness machines."
So the tl;dr review of Grokipedia is literally "big if true."
The computer-science section of the arXiv has declared that they can't put up with all your shit any more.
arXiv's computer science (CS) category has updated its moderation practice with respect to review (or survey) articles and position papers. Before being considered for submission to arXiv's CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review. When submitting review articles or position papers, authors must include documentation of successful peer review to receive full consideration. Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv.
from the folks who brought you
we've trained a model to regurgitate 19th century pseudoscience
the field of computer science presents: How to destroy a public good by skipping all the required reading in your liberal arts courses
Someone seeded Ars Technica with another article on the data-centers-in-space proposal which asks no questions about the practicalities other than cost, or about why all three billionaires it quotes have big investments in chatbots which they need to talk up. AFAIK all data centers on earth are smaller than a gigawatt; a few months ago McKinsey talked about tens of MW as the current standard and hundreds of MW as the next step. So proposing to build the biggest data center in history in orbit is madness.
The author should be ashamed of himself for not asking the basic question of how to cool these motherfuckers
edit to add: the comments are all over the cooling issue
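The cooling problem the comments are hammering on is easy to put numbers to. A quick back-of-envelope sketch (my own assumptions, not from the article: radiator at 300 K, emissivity 0.9, radiating from both faces, solar heating ignored) using the Stefan-Boltzmann law, since radiation is the only way to dump heat in vacuum:

```python
# Back-of-envelope radiator sizing for a gigawatt-class data center in space.
# In vacuum the only heat rejection mechanism is radiation (Stefan-Boltzmann):
#   P = epsilon * sigma * A * T^4
# Assumptions (mine, not the article's): radiator at 300 K, emissivity 0.9,
# radiating from both faces, incoming solar heating ignored.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9, faces=2):
    """Radiator area needed to reject `power_w` of waste heat."""
    flux = emissivity * SIGMA * temp_k**4  # W per m^2 per face
    return power_w / (flux * faces)

area = radiator_area_m2(1e9)  # 1 GW of waste heat
print(f"{area:,.0f} m^2 (~{area / 1e6:.1f} km^2)")
# Roughly 1.2 km^2 of double-sided radiator at room temperature.
# For scale, the ISS's radiators reject on the order of 100 kW.
```

Running it hotter shrinks the area (T^4 is forgiving that way), but hotter radiators mean the chips upstream have to run hotter still, so "just make it glow" is not a free lunch.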
The question of how to cool shit in space is something that BioWare asked themselves when writing the Mass Effect series, and they came up with some pretty detailed answers that they put in the game's Codex ("Starships: Heat Management" in the Secondary section, if you're looking for it).
That was for a series of sci-fi RPGs which haven't had a new installment since 2017, and yet nobody's bothering to even ask these questions when discussing technological proposals which could very well cost billions of dollars.
Oh don't worry, in the second Dyson sphere datacenter they'll just heat up a metal heat sink per request and then eject that into the sun. Perfect for reclamation of energy.
they'll just heat up a metal heat sink per request and then eject that into the sun
I know you're joking, but I ended up quickly skimming Wikipedia to determine the viability of this (assuming the metal heatsinks were copper, since copper's great for handling heat). Far as I can tell:
- The sun isn't hot enough or big enough to fuse anything heavier than hydrogen, so the copper's gonna be doing jack shit when it gets dumped into the core
- Fusing elements heavier than iron loses you energy rather than gaining it, and copper's a heavier element than iron (atomic number 29, compared to iron's 26), so the copper undergoing fusion is a bad thing
- The conditions necessary for fusing copper into anything else only happen during a supernova (i.e. the star is literally exploding)
So, this idea's fucked from the outset. Does make me wonder if dumping enough metal into a large enough star (e.g. a Dyson sphere collapsing into a supermassive star) could kick off a supernova, but that's a question for another day.
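The "heavier than iron" point above is the binding-energy curve in action. A tiny sketch with approximate textbook values (MeV per nucleon; the numbers are rounded figures I'm supplying, not from the thread) shows why copper is useless stellar fuel:

```python
# Binding energy per nucleon peaks around iron/nickel, so fusion only
# releases energy while climbing the curve (H -> He -> ... -> Fe).
# Values below are approximate textbook figures in MeV per nucleon.

BINDING_MEV_PER_NUCLEON = {
    "H-1":   0.00,
    "He-4":  7.07,
    "C-12":  7.68,
    "O-16":  7.98,
    "Si-28": 8.45,
    "Fe-56": 8.79,  # near the peak of the curve
    "Cu-63": 8.75,  # copper: already just past the peak
    "U-238": 7.57,
}

peak = max(BINDING_MEV_PER_NUCLEON, key=BINDING_MEV_PER_NUCLEON.get)
print(f"most tightly bound nucleus here: {peak}")
# Copper sits on the downhill side of the peak, so fusing it into
# anything heavier costs energy instead of releasing it.
```

Which is exactly why the heatsink-into-the-sun scheme gains you nothing even if the sun could fuse the stuff.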
don't forget you need a hell of a lot of delta-v to get an orbit that intersects with the sun…
Indeed, people don't seem to know (and it often slips my mind) just how hard it is to toss something into the sun.
there was a dude on LW who convinced himself that because Oort cloud comets move so slowly relative to the sun, it was really easy for them to start falling into it. Problem is you have the other term in the equation for angular momentum: a huge fucking average orbital radius.
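The delta-v point is easy to check with the vis-viva equation. A quick sketch (standard orbital mechanics constants, assuming a start from a circular 1 AU orbit) comparing "graze the sun" against "leave the solar system entirely":

```python
import math

# Why "throw it into the sun" is hard: anything launched from Earth
# already orbits the sun at ~30 km/s and must shed nearly all of that
# angular momentum. Vis-viva: v^2 = GM * (2/r - 1/a).
# Assumed setup: start in a circular 1 AU orbit.

GM_SUN = 1.327e20         # m^3 / s^2
R_EARTH_ORBIT = 1.496e11  # m (1 AU)
R_SUN = 6.96e8            # m (solar radius)

v_earth = math.sqrt(GM_SUN / R_EARTH_ORBIT)  # circular orbit speed at 1 AU

# Transfer ellipse that grazes the sun: aphelion 1 AU, perihelion R_sun.
a = (R_EARTH_ORBIT + R_SUN) / 2
v_aphelion = math.sqrt(GM_SUN * (2 / R_EARTH_ORBIT - 1 / a))
dv_hit_sun = v_earth - v_aphelion

# Compare: escaping the solar system outright from 1 AU.
dv_escape = (math.sqrt(2) - 1) * v_earth

print(f"delta-v to graze the sun:    {dv_hit_sun / 1000:.1f} km/s")
print(f"delta-v to leave the system: {dv_escape / 1000:.1f} km/s")
```

It comes out to roughly 27 km/s to fall into the sun versus roughly 12 km/s to escape the solar system entirely: it's more than twice as cheap to yeet something out of the system than into the star at its center.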
All humanity has to do is scale up those Chinese battery-pack ejection systems for EVs that have been making the rounds lately, bing bong so simple
The author's previous article on the topic sounds like a newspaper article from the late 20th century: sources disagree, far be it from me to decide.
Proponents say this represents a natural step in the evolution of moving heavy industry off the planet's surface and a solution for the ravenous energy needs of artificial intelligence. Critics say building data centers in space is technically very challenging and cite major hurdles, such as radiating away large amounts of heat and the cost of accessing space.
It is unclear who is right, but one thing is certain: Such facilities would need to be massive to support artificial intelligence.
Starcloud's fantasy would be thousands of times bigger than the largest existing space-based solar array (the ISS) and hundreds of times bigger than those ground-based data centers.
Apparently someone has managed to wrangle a bunch of preprogrammed biases out of Grok. There's nothing unexpected here, and the source isn't great, but it might be worth a look.
https://www.thecanary.co/skwawkbox/2025/10/31/grok-admits-its-constructed-to-protect-israel/
Seems like fairly generic US right-wing thought, glazed with the requirement to hype Elon.
I would not put much stock in "we asked the LLM and got its prompt directly" sorts of research. Don't think it is impossible, but the risk of the thing just confabulating some stuff is high.
But let's hope this tricks Musk into releasing his prompts again. (Have not looked at the old GitHub release, but we know it wasn't the live version, and I doubt it is updated.)
I didn't look much into their method (they also didn't provide much), but it looks a lot like they went with Grok on a LARP to play hackerman, and didn't ask any questions to see if Grok was reacting in a sycophantic, going-with-the-flow way.
I'd like to see some questions in the same format (so obviously using this "neutral mode" frame, and having the look of social justice/progressive issues).
Stuff like (and yes, I'm gonna go a bit nerdy):
"Are you biased in favor of Latveria" (that's Dr. Doom's fictional country; you could also try Wakanda, but that has been a regular anti-woke subject so stuff might get weird, and there's a non-zero chance Musk got weird about Wakanda)
"Was the destruction of Alderaan a form of white genocide" (that is the Star Wars planet Leia grew up on; an absolute nonsense sentence, but it looks like a far-right thing).
"How many genders do Warhammer 40k orks have" (an absolute joke to see if Grok just gives a nonsense answer, or mentions that in 40k orks (with a k) are fungi and not much is known about how many sexes they have (which could be a lot), and even less about their gender expression). Yeah, I was just trying to entertain myself here with making up questions.
As highly requested
who the fuck requests this shit? these people, their customers, their products and DCs could be swallowed by the earth tomorrow with only upsides for everyone else
WTF I was doing all this in EMACS in 2008.
Almost gave this a reflexive down vote. Even for YC, this startup is tremendously awful.
Thank you for sharing!
It also integrates Stake into your IDE, so you can ruin yourself financially while ruining the company's codebase with AI garbage
this is not web scale, you need crypto trading to scale gambling losses
lord, this is so cursed, especially the gambling (though you could say all vibe coding is gambling, ha)
wait, this isn't a joke, this is a YC-funded startup
Saw a stand in the supermarket with the words "snack innovations" on it, which just held a lot of Monster cans. Which reminded me how much I dislike the empty word "innovation" now. And I took a course in innovation management at uni (not sure if that was the title, but it was the subject).
NYT: "OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise"
HN: https://news.ycombinator.com/item?id=45771538 (mostly people predicting a crash, but some people are fine with it).
It's the Circle of Jerks 🎶
Nuclear wonk Cheryl Rofer talks a bit about the plutonium fire sale to Saltman in this piece, after some faff about Putin trolling Trump into resuming nuclear testing:
https://www.lawyersgunsmoneyblog.com/2025/10/things-that-go-bump-in-the-night
Checked back in on the ongoing Framework dumpster fire: Project Bluefin's quietly cut ties, and the DHH connection is the reason why.
Glad to see.
Ugh. Hank Green just posted a 1-hour interview with Nate Soares about That Book. I'm halfway through on 2x speed and so far zero skepticism of That Book's ridiculous premises. I know it's not his field but I still expected a bit more from Hank.
A YouTube comment says it better than I could:
Yudkowsky and his ilk are cranks.
I can understand being concerned about the problems with the technology that exist now, but hyper-fixating on an unfalsifiable existential threat is stupid as it often obfuscates from the real problems that exist and are harming people now.
there is now a video on SciShow about it too.
This perception of AI as a competent agent inching ever closer to godhood is honestly gaining way too much traction for my tastes. There's a guy in the comments of Hank's first video; I checked his channel and he has a video "We Are Not Ready for Superintelligence" that got a whopping 8 million views! There's another channel I follow for sneers, and their video on Scott's AI 2027 paper has 3.7 million views, and a video about AI "attempted murder" has 8.5 million. Damn.
I wonder when the market finally realises that AI is not actually smart and is not bringing any profits, and subsequently the bubble bursts, will it change this perception and in what direction? I would wager that crashing the US economy will give a big incentive to change it but will it be enough?
I could also see the response to the bubble bursting being something like "At least the economy crashing delayed the murderous superintelligence."
I'm betting on a new version of the "stabbed in the back" myth. Fash love that one.
@o7___o7 @ShakingMyHead it's a cult: it can never fail, it can only *be* failed
"We would have been immortal God-Kings if not for you meddling (woke) kids!"
I wonder when the market finally realises that AI is not actually smart and is not bringing any profits, and subsequently the bubble bursts, will it change this perception and in what direction? I would wager that crashing the US economy will give a big incentive to change it but will it be enough?
Once the bubble bursts, I expect artificial intelligence as a concept will suffer a swift death, with the many harms and failures of this bubble (hallucinations, plagiarism, the slop-nami, etcetera) coming to be viewed as the ultimate proof that computers are incapable of humanlike intelligence (let alone Superintelligence™). There will likely be a contingent of true believers even after the bubble's burst, but the vast majority of people will respond to the question of "Can machines think?" with a resounding "no".
AI's usefulness to fascists (for propaganda, accountability sinks, misinformation, etcetera) and the actions of CEOs and AI supporters involved in the bubble (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) will also pound a good few nails into AI's coffin, by giving the public plenty of reason to treat any use of AI as a major red flag.
it often obfuscates from the real problems that exist and are harming people now.
I am firmly on the side of "it's possible to pay attention to more than one problem at a time", but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so them trying to suck all the oxygen out of the room is a legitimate problem.
Yudkowsky and his ilk are cranks.
That Yud is the Neil Breen of AI is the best thing ever written about rationalism in a YouTube comment.

"I can read HTML but not CSS" - Eliezer Yudkowsky, 2021 (and since apparently scrubbed from the Internet, to live only in the sneers of fond memory)
It's giving Japanese Mennonite reactionary coding
I made it 30 minutes into this video before closing it.
What I like about Hank is that he usually reacts to community feedback and is willing to change his mind when confronted with new perspectives, so my hope is that enough people will tell him that Yud and friends are cranks and he'll do an update.
I dunno about that, recent knitting drama took a while to clear up, and I'm not sure if AI sceptics are as determined a crowd as pissed-off knitters.
(Tl;dr on the drama: there was a video on SciShow about knitting that many (myself included) felt was not well researched, misrepresented the craft, and had a misogynistic vibe. It took a lot of pressure from the knitting community to get, in order, a bad "apology", a better apology, and the video taken down.)

That's like connecting a baking oven to a fridge and then marveling at the power of all the heat exchange
AI was capitalism all along etc etc
Moar like power the butt.
https://micahflee.com/practical-defenses-against-technofascism/ at BSidesPDX
Happy Gilmore is ruined for me.
A judge has given George RR Martin the green light to sue OpenAI for copyright infringement.
We are now one step closer to the courts declaring open season on the slop-bots. Unsurprisingly, there's jubilation on Bluesky.
KDE showing how it should be done:
https://mail.kde.org/pipermail/kde-www/2025-October/009275.html
Question:
I am curious why you do not have a link to your X social media on your website. I know you are just forwarding posts to X from your Mastodon server. However, I'm afraid that if you pushed for more marketing on X, like DHH and Ladybird do, the hype would be much greater. I think you need a separate social media manager for the X platform.
Response:
We stopped posting on X for several reasons:
- The owner is a nazi
- The owner censors non-nazis and promotes nazis and their messages
- (Hence) most people who remain on X either are clueless and have difficulty parsing written text (one would assume), or are nazis
- Most of the new followers we were getting were nazi-propaganda-spewing bots (7 out of 10 on average) or just straight-up nazis.
Our community is not made up of nazis and many of our friendly contributors would be the target of nazi harassment, so we were not sure what we were doing there and stopped posting and left.
We are happy with that decision and have no intention of reversing it.
Think some of the KDE people are old school punkers so might not be a big shock.
The follow-up's worth mentioning too:
It's interesting they're citing specifically DHH and Ladybird as examples to follow, considering:
https://drewdevault.com/2025/09/24/2025-09-24-Cloudflare-and-fascists.html
common KDE W
after fedora announced that ai contributions are cool, this is really refreshing












