Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
All participants in the Stubsack, including awful.systems regulars and those joining from elsewhere, are reminded that this is not debate club. Anyone tempted by the possibility of debate-club behavior is encouraged to touch your nearest grass immediately. We are here to sneer, not to bicker: This is a place to mock the outside world, not to settle grand matters of ideology, unless the latter is done in an extraordinarily amusing way.
I need to lurk more, feel like I missed some good drama 🍿
If it isn't on this quick sneer page, you can just look at the posts with a lot of replies; either it broke containment, or somebody went full debate mode.
sometimes both
Haven't seen this skeet posted here. Skeet:
Itās 2050 and a teen girl is torrenting a .tar.gz file of all the consciousnesses of all the tech bros who uploaded themselves into the cloud in a bid for immortality and modding them into The Sims 4
who's the basilisk now?
My dad was a bit freaked out by a video version ("We're not ready for super-intelligence") of the "AI 2027" paper, particularly finding two end scenarios a bit spooky: Colossus-style cooperating AIs taking over the world, and the oligarch concentration of power, which I think definitely echoed sci-fi he watched/read as a teen.
In case anyone else finds it useful, these are the "Comments as I watch it" that I compiled for him.
Before-watching notes:
- AI-only channel with only 3 videos.
- Produced by "80000hours", which is an EA branch (trying to peddle to you the best way to organize 40 years * 50 weeks * 40 hours [I love that they assume only 2 weeks of holidays]), which is definitely cult-adjacent: https://80000hours.org/about/#what-do-we-do. Mostly appears to be attempting to steer young people toward what they believe are "High impact" jobs.
Video Notes:
- The backing paper, "AI 2027", is a bit of a joke; for reference, one of the main authors is very much a "cult member": Scott Alexander Siskind, author of "Slate Star Codex" and "Astral Codex Ten".
- Other authors include [AI Futures Project]:
  - Daniel Kokotajlo (podcast co-host of Siskind's, ex-OpenAI employee, LessWrong/EA regular)
  - Thomas Larsen (ex-MIRI [Machine Intelligence Research Institute = really, really culty], LessWrong/EA regular)
  - Eli Lifland (LessWrong/EA regular)
  - Romeo Dean (Astra Fellowship recipient = money for AI Safety research, definitely EA sphere)
- A lot of fluff trying to hype up the credentials of the authors.
- AGI does not have a bounded definition.
- They are playing up the China angle to try and drum up jingoistic support.
- Exaggerating Chat GPT-3's success by merely citing "users", without mentioning actual revenue or actual quality.
- Quote: "How do these things interact? Well, we don't know, but thinking through in detail how it might go is the way to start grappling with that."
  -> I think this epitomises the biggest flaw of their movement: they believe that from "first principles" it's possible to think hard enough (without needing to confront it with reality) and divine the future.
  -> You can look up "Prediction Markets", which is another of their ontological sins.
- I will note that the prediction of "Agents" was not a hard one, since this is what this whole circle wants to achieve, and as the video itself points out they're fantastically incompetent/unreliable.
- Note: This video was made before the release of GPT-5. We don't know precisely how much more compute GPT-5 truly required altogether, but it's a very incremental change compared to GPT-4. I think this "more training" philosophy is why OpenAI is currently trying (half-succeeding, half-failing) to raise trillions of dollars to build out data centers; my prediction is that the AI bubble bursts before these data centers come to fruition.
- Note: The video assumes the models are kept secret, but in reality OpenAI would have a very vested interest in displaying capability, even without making a model available to the public. Also, even on consumer models, OpenAI currently loses a bunch of money on every query.
- Note: The video assumes "Singularitarianism", an ever-accelerating quality of code, and that's why they keep the models secret. I think this hits a compute/energy wall in real life, even if you assume that LLMs are actually useful for making "quality" code. These ideas are not new, and these people would raise alarms about them with or without current LLM tech.
- Specific threats of "bio-weapons", which a priori cannot really be achieved without experimentation, and while "automated" labs half-exist, they still require a lot of human involvement/resources. Technically grad students could also make deadly bioweapons, but no one is being alarmist about them.
- Note: "Agent 2" continuous online learning is gobbledygook; that isn't how ML works, even today. At some point there are very diminishing returns, and it's a complete waste of time/energy to continue training a specific model; a qualitative difference would be achieved with a different model. I suspect this sneakily displays "Singularitarianism" dogma.
- Quote: "Hack into other servers. Install a copy of itself. Evade detection."
  -> This is just science fiction; in the real world these models require specialized hardware to run at any effective speed, so this would be extremely unlikely to evade detection. Also, this treats the model as a single entity with single goals, when in reality every time it's "run" is effectively a new instance.
- Note: This subculture loves the concept of "science in secrecy", which features a lot in the writings of Eliezer Yudkowsky. It is cultish both in keeping their own deeds "in a veil of secrecy", and helpful here when making a prophecy/conspiracy theory, since it makes the claim specifically hard to disprove (it's happening in secret!).
- Note: Even today, chain-of-thought is not that reliable at explaining why a bot gives a particular answer. It's more analogous to guiding "search" than to true thought as in humans anyway. Them using "Alien Language" would not be that different.
- Agent 3, magically fast and cheap, assuming there are no minimum energy requirements. Then you can magically run 200,000 copies of it, magically equivalent to 50,000 humans sped up by 30x. (The magic is "explained" in the paper by big assumptions, essentially just equating how fast you can talk with the quality of the talking, which given the length of their typical blog posts is actually quite funny.)
- Note: "Alignment" was the core mission of MIRI/Eliezer Yudkowsky.
- Note: Equating power and intelligence a lot (not in this video, but in general being suspiciously racist/eugenicist about it), ignoring the material constraints of actual power [echo: again the epitomical sin of "If you just think hard enough"].
- Note: Also assuming that trillions of dollars of growth can actually happen simultaneously with millions losing their jobs.
- I am betting that the "There is another" part of the video is probably deliberately echoing Colossus.
- The video casually assumes that the only limit to practical fusion and nanotech is just intelligence (instead of potential dead ends; actually the nanotech part is a particular fancy of theirs, you can look up "diamondoid bacteria" on LessWrong if you want a laugh).
- The two outcomes at the end of the video are literally robo-heaven and robo-hell, and if you just follow our teachings (in this case slow-downs on AI) you can get to robo-heaven. You will notice they don't imagine/advocate for a future with no massive AI integration into society; they want their robo-heaven.
- Quote: "None of the experts are disagreeing about a wild future."
  -> I would say some of them are quite strongly suggesting that AGI soon is implausible. I think many would agree that right now the future looks dire with or without super-AI, or even regular AI.
Takeaway section:
Yeah this really is a cult recruitment video essentially.
We're almost at the end of 2025 and agents don't fucking exist the way they predicted. Literally 0% accuracy so far. AI 2027 agmi.

^image of Daniel K, who already updated his rapture prophecy to 2029 because he's a mark
I stumbled onto that vid a while back, watched the first minute or so, lol'ed at the glazing of Kokotajlo, and stopped the vid. I did think about posting it here to be torn apart but forgot about it. I watched a little bit further and got "they chose to write this as a narrative". Of course they fucking did. It's their one thing. Write a shitty 10k-word story that amounts to some combination of "really makes you think" and "big if true".
Here's a story: Once upon a time there was a world. In it people were sad. Then one day swlabr was elected supreme benevolent ruler and then nobody was sad again :) the end. Wow make u think. Many experts agree
-
Last week, we learned that area transphobe Sabine Hossenfelder is using her arXiv-posting privileges to shill Eric Weinstein's bullshit. I have poked around the places where I'd expect to find technical discussion of a physics preprint, and I've come up with nothing. The Stubsack thread, as superficial as it was, has been the most substantive conversation about her post's actual content.
Wrong link. this points to the NeurIPS post for this week.
Good catch; thanks. I think I had too many awful.system tabs open at once.
today in "I fucking called it": fedora (aka mostly red hat) has decided to allow slop code in a way that violates even their utterly mid stated principles around the tech
if you're downstream from any fedora packages (and I don't know the scope of this policy so it might be safe to consider anything owned by red hat in general to be tainted - yes I realize most of us are downstream from a bunch of red hat shit) it might be time to evaluate an alternative if available
among others, so many systemd and libvirt things :|
fortunately a long-ish tail on a lot of that, but fucking still
New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants - already a daily information gateway for millions of people - routinely misrepresent news content no matter which language, territory, or AI platform is tested. [...] 45% of all AI answers had at least one significant issue.
- 31% of responses showed serious sourcing problems - missing, misleading, or incorrect attributions.
- 20% contained major accuracy issues, including hallucinated details and outdated information.
- Gemini performed worst, with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
And yet the BBC still has a Programme Director for "Generative AI" who gets trotted out to say "We want these tools to succeed". No, we don't, you blithering bellend.
@blakestacey @BlueMonday1984 I also want my Perpetual Motion Machine and Circle-Squaring Algorithm to succeed, but what are ya gonna do? 🤷
-
In lighter news, this anti-LLM rhyme made me chuckle:
I will not talk with a chatbot
I do not want it while I shop
I do not want it on Windows X-box
I do not want it in Firefox
I do not want it in my house
I do not want it on my mouse
I do not want it here or there
I do not want it anywhere.
I do not want AI and Spam
I do not want them Sam-Alt-Man
I suppose it is an iambic tetrameter, but the third and fourth lines do not fit.
That's how you know a human wrote it
You donāt recognize it?
No, I donāt.
one of today's lucky 10,000
very short children's book, with intentionally atypical rhythm, by Dr Seuss
written in call-response style in dialogue between two characters (unnamed and Sam-I-Am)
https://www.readstoriesforkids.com/Green-Eggs-and-Ham-text.html - text without images, but best enjoyed with the images
the full book on The Internet Archive https://archive.org/details/greeneggsham0000unse/
dr seuss - green eggs & ham
Another attempt to platform fascists has cropped up in FOSS, and Drew DeVault's talked all about it. Featuring our good friend Curtis Yarvin.
of course the organization I know primarily for platforming fascists and astroturfing on YouTube was secretly an even worse grift and somehow tied in with Yarvin, why wouldn't it be
given that Rossmann's at the head of this thing too, I'm starting to regret not taking GrapheneOS (who, notably, were also a target for this grift) seriously when they said Rossmann's involved in a bunch of terrible shit. the right to repair deserves a better figurehead.
fuckin pisses me off, given his clippy campaign is helping move pivot shirts
sigh
I WILL NOT CHANGE, CLIPPY SUCKED FIRST
Damn right. He needs to quit, he's the one who sucks.
The fash don't have magic doodoo fingers that obligate decent people to surrender every time they touch something we like, and we should never concede as if they do.
hadn't been aware that rossmann's into dodgy stuff (knew fairly little about him outside of some repair stuff on his channel), but ugh
also clicking through into FUTO's projects, it's all a bit gravitating around one point, "built on polycentric". so I wonder what that means?
Polycentric is an open-source, distributed social network that lets you publish content to multiple servers.
already at "I'm interested" because it's interesting to see what other work happens in this space.
and then very next sentence we get to
If you're censored on one server, your content remains accessible from other servers
ah. I see. the "opt-out moderation" is also telling - how does it work? who knows! it's got a paragraph under the introduction but doesn't seem to be mentioned anywhere else in the docs.
extra frustrating to see because the projects these fucks are taking on (like the open cast thing) are items that sorely need stronger options in the open space. but not like this. never like this.
Ah, it's another Urbit, isn't it?
certainly has more than a bit of that urbit coiner Sovereign Individual shit going on yeah
I tried looking around a bit to see if I could find any info about contributors there, and for the most part none of them really seem to have much internet fingerprint at all. I did find one person with a moderately extensive set of personal repo/project commits spanning back a few years, long enough to show they were doing a BSc/Hons/something circa 2018, which isn't concrete but does strongly hint at a current age of mid 20s to mid 30s. "get 'em while they're young and you can poison their brains early!" - the bayfucker mantra
god damn it. i guess the name of the founder might have been a hint, only one letter away from our favorite roman saluter.
i use immich, one of the projects they seem to have actually funded in a big way. it's a very good selfhosted replacement for google photos. at least the license is actually open source, as opposed to grayjay, so here's hoping it has a future in case the fascists try to fuck with it.
i guess the problem though isn't with the funding and/or control of individual projects, it's with the long-term influence in the foss community they seem to be after.
i had a feeling about FUTO because of rossmann's involvement. became leery of him after this youtube bullshit from 2018:
Let's discuss why journalists are afraid of Elon Musk right now (and why they deserve to be)
Elon Musk wants to come up with a way to rate the credibility and accuracy of media organizations & individual journalists. This blatant misrepresentation of his words, given in the middle of this conversation, is a PERFECT example of WHY this is so badly needed in modern society.
I'm not a fan of Tesla for being, in many ways, the "Apple of cars." That being said, whether or not I like Tesla when it comes to a repair standpoint has nothing to do with the hate being thrown at Elon for something he never meant in the words he said, and is entirely separate from my agreement with him on the idea of a media credibility rating platform.
This is not a sneer so much as a sneer request; anyone know of any good articles written about the total hypocrisy of the Free Speech brigade since the inauguration? By far the most anti-speech environment in decades and most of them are still just whining about pronouns on campus or whatever.
(Yes; FIRE has passed this very basic test and has occasionally switched topics from whining about "leftist professors" to saying stuff like "it's not great that we're deporting people for writing articles for their school paper about how genocide is bad". Literally everyone else is a hypocrite)
Biggest examples I know of are Shaun's 4-hour review of the "War on Science" book, and the backlash to the Riyadh Comedy Festival (the whole drama there was hilarious, and not because of the comedy).
Here's a written review of that book which covers its problems fairly well, I think. (Which indirectly reminded me that last year I wrote a blog post about how Sokal and Bricmont's Fashionable Nonsense wasn't such hot stuff. I guess I hadn't shared that here before.)
I also found this Reddit comment that lays into Sokal and Bricmont's treatment of Lacan, but not having read Lacan, I can't vouch for it:
I'll just note the sneerability of how Sokal contributed to sex pest Krauss' War on Science book, right alongside Jordan Peterson, who has said plenty of things as batshit as Sokal accused Lacan of being.
TechDirt has posts about this quite often.
The Framework thread caused by the company's fash turn is still going even after eight full days.
Lotta lowlights to pick from, but the guy openly praising DHH for driving Basecamp straight off a cliff is particularly sneer-worthy:

"Apolitical" is peak red flag these days, eh?
Definitely, it's just code for "I'm ok with nazis" at this point.
Yeah, definitely synonymous with the whole "neutrality sides with the oppressor" thing
More "red hat" than "red flag", but you're still dead-on.
I hope it's still going after 8 full years, if the company's even still in business. Trust is only built back with accountability.
"Not Winston Smith?" So, O'Brien?
For something lighter, here's an AI bro getting wowed by the shittiest "video game" I've ever seen (trust me, the screenshot doesn't do it justice):

In lieu of sneering this shit, I'd like to argue that arts education should become mandatory for all students post-bubble, regardless of their profession. STEM, humanities, tech, doesn't matter - give them four years of art so they don't turn out like this guy.
https://xcancel.com/TaylorLorenz/status/1980035057067884670
hmm yes, this will surely replace wikipedia.
The idea that AI will be a boon for searching the mathematical literature is undermined somewhat by how it shits the bed there too.
Closely related is a thought I had after responding to yet another paper that says hallucinations can be fixed:
I'm starting to suspect that mathematics is not an emergent skill of language models. Formally, given a fixed set of hard mathematical questions, it doesn't appear that increasing training data necessarily improves the model's ability to generate valid proofs answering those questions. There could be a sharp divide between memetically-trained models which only know cultural concepts and models like Gödel machines or genetic evolution which easily generate proofs but have no cultural awareness whatsoever.
Every time I hear a moderate AI argument (e.g. AI will be an aid for searching literature or writing code), it's like, "Look, it's impressive that the AI managed to do this. Sure, it took about three dozen prompts over five hours, made me waste another five hours because it generated some completely incorrect nonsense that I had to verify, produced an answer that was much lower quality than if I had just searched it up myself, and boiled two lakes in the process. You should acknowledge that there is something there, even if it did take a trillion dollars of hardware and power to grind the entire internet and all books and scientific papers into a viscous paste. Your objections are invalid because I'm sure things are gonna improve because Progress."
I am doubly annoyed when I turn my back and they switch back to spouting nonsense about exponential curves and how AI is gonna be smarter than humans at literally everything.
Wouldn't f(x) = x^2 + 1 be a counterexample to "any entire (differentiable everywhere) function that is never zero must be constant"? Or are some terms defined differently in complex analysis than in the math I learned?
I've never heard of a function being called entire out of complex analysis. But still, it is zero at i.
A fact that AI gets wrong.
flaviat explained why your counterexample is not correct. But also, the correct statement (Liouville's theorem) is that a bounded entire function must be constant.
Or Picard's little theorem, which says that if an entire function misses two points (e.g. is never 0 or 1), then that function must be constant.
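For anyone who wants the precise versions (standard statements, quoted from memory, so check any complex analysis text): Liouville's theorem says that if f is entire and there is a constant M with |f(z)| <= M for all z in C, then f is constant. Picard's little theorem says that if f is entire and its image omits at least two points of C, then f is constant.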
Oh, I didn't know that!
Who is flaviat? I don't see that handle on this lemmy or Greg Egan's mastodon account, and Egan just re-tooted someone who gives x^2 + 1 as a counterexample.
Does this link work for you to see the comment? https://awful.systems/comment/9163259
now it works! I do not understand the two sentences "I've never heard of a function being called entire out of complex analysis. But still, it (what? - ed.) is zero at i."
I believe those sentences can be paraphrased as, "The term entire function is only used in complex analysis. The function f(z) = z^2 + 1 is zero at z = i."
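To spell the arithmetic out (the bit that trips people up is that the zero is complex, not real): f(i) = i^2 + 1 = -1 + 1 = 0, and likewise f(-i) = 0. Over the reals, x^2 + 1 is never zero, which is why it looks like a counterexample, but an entire function lives on all of C, where this one does have zeros.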
Thanks, I don't speak English natively
the poster is referring to the function
f(z) = z^2 + 1
It's worth noting that, unlike a real function, a complex function that is differentiable in a neighborhood is infinitely differentiable in that neighborhood. An informal intuition behind this: in the reals, for a limit to exist, the left and right limits must agree. In C, the limit from every direction must agree. Thus, a limit existing in C is "stronger" than it existing in R.
Edit: wikipedia pages on holomorphism and analyticity (did I spell this right) are good
entire always means holomorphic on the whole complex plane
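Putting that definition in symbols (standard textbook form, nothing specific to this thread): f is entire iff for every z0 in C the limit f'(z0) = lim_{h -> 0} [f(z0 + h) - f(z0)] / h exists, where h ranges over complex numbers. Because h can approach 0 from any direction in the plane, this is a much stronger requirement than real differentiability, which is the intuition given above.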
New paper on LLMs just dropped, titled LLMs Can Get "Brain Rot"!
Currently a novelty, but it could prove useful for making the likes of Iocaine and Nepenthes more effective - especially since the paper notes:
the damage is multifaceted in changing the reasoning patterns and is persistent against large-scale post-hoc tuning.
It does also suggest doing some actual quality control to prevent damage to the LLMs, but that sure ain't happening
The paper is itself written by an LLM.
Fuck.
Trump freed Binance fraudster, SBF pardon futures mooning rn

Does anyone else get flashbacks to that episode of the Powerpuff Girls where the villain takes over the city and makes a law that "crime is now legal"? Because that keeps popping into my head for some reason.