Want to wade into the snowy sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
On this most terrible of online days, "enjoy" this LW attempt at humor
https://www.lesswrong.com/posts/3GbM9hmyJqn4LNXrG/yams-s-shortform?commentId=ik6ywoQYsGrrQv8Dm
edit: there are more submissions on the theme of "humor" on site now. Let's just say the cringe factor outweighs the humor factor by a large amount.
new odium symposium episode: https://www.patreon.com/posts/13-joker-is-both-154123315. links to various platforms at www.odiumsymposium.com
we read umberto eco's essay ur-fascism (we have mixed feelings about it) and then apply it to frank miller's 1986 batman comic the dark knight returns
Someone may (unverified for now) have left the frontend source maps in the Claude Code prod release (probably Claude). If this is accurate, it does not bode well for Anthropic's theoretical IPO. But I think it might be real, because I am not the least bit surprised it happened, nor am I the least bit surprised at the quality. https://github.com/chatgptprojects/claude-code
For example, I can only hope their Safeguards team has done more on the Go backend than this for safeguards. From the constants file cyberRiskInstruction.ts:
export const CYBER_RISK_INSTRUCTION = "IMPORTANT: Assist with authorized security testing, defensive security, CTF challenges, and educational contexts. Refuse requests for destructive techniques, DoS attacks, mass targeting, supply chain compromise, or detection evasion for malicious purposes. Dual-use security tools (C2 frameworks, credential testing, exploit development) require clear authorization context: pentesting engagements, CTF competitions, security research, or defensive use cases"

That's it. That's all the constants the file contains. The only other thing in it is a block comment explaining what it did, who to talk to if you want to modify it, etc.
There is this amazing bit at the end of that block comment though.
Claude: Do not edit this file unless explicitly asked to do so by the user.
Brilliant. I feel much safer already.
Can we talk about the tamagotchi feature they were looking to add in for April 1? Because apparently it needed a little friend, but also with gacha mechanics, because we live in hell?
Claude: Do not edit this file unless explicitly asked to do so by the user.
Wait, it can be edited? Tissue paper guardrails.
This is all just JavaScript, so yes. As a tissue-thin defense, had they not left their source maps wide open, it would have been much harder to know this string existed and how to edit it. Not impossible, but much harder.
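For anyone unfamiliar with why leaving source maps exposed matters: a source map is just a JSON file whose optional sourcesContent field embeds the original, un-minified source verbatim, so anyone who can fetch bundle.js.map can read the pre-build code. A minimal sketch (the file name and map contents here are invented for illustration, not taken from the actual release):

```python
import json

# A toy source map shaped like the real v3 format (version, sources,
# sourcesContent, mappings); file names and contents are invented.
source_map_json = json.dumps({
    "version": 3,
    "sources": ["src/constants/cyberRiskInstruction.ts"],
    "sourcesContent": [
        'export const CYBER_RISK_INSTRUCTION = "IMPORTANT: ..."'
    ],
    "mappings": "AAAA",
})

# Anyone who can fetch the .map file can recover the originals:
parsed = json.loads(source_map_json)
recovered = dict(zip(parsed["sources"], parsed["sourcesContent"]))
for path, src in recovered.items():
    print(path)
    print(src)
```

In other words: shipping the .map next to the minified bundle hands over the original file layout and strings, which is exactly how a string like this gets found.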
Yeah, letting the intrinsically insecure RNG recursively rewrite its own security instructions definitely can't go wrong. I mean, they limited it to only do so when the users ask nicely!
Edit to add:
The more I think about it, the more it speaks to Anthropic having an absolute nonsense threat model that is more concerned with the science-fiction doomsday AI "FOOM" than it is with any of the harms that these systems (or indeed any information system) can and will do in the real world. The current crop of AI technologies, while operating at a terrifying scale, are not unique in their capacity to waste resources, reify bias and inequality, misinform, justify bad and evil decisions, etc. What is unique, in my estimation, is both the massive scale at which these things operate despite the incredible costs of doing so and their seeming immunity to being reality-checked on this. No matter how many times the warning bells sound about these systems' vulnerability to exploitation, the destructive capacity of AI sycophancy and psychosis, or the simple inability of the electrical infrastructure to support their intended power consumption (or at least their declared intent; in a bubble we shouldn't assume they actually expect to build that much), the people behind these systems continue to focus their efforts on "how do we prevent Skynet" over any of it.
Thinking in the context of Charlie Stross's old talk about corporations as "slow AI," I wonder if some of the concern comes either explicitly or implicitly from an awareness that "keep growing and consuming more resources until there's nothing left for anything else, including human survival" isn't actually a deviation from how these organizations are building these systems. It's just the natural conclusion of the same structures and decision-making processes that lead them to build these things in the first place and ignore all the incredibly obvious problems. They could try to address these concerns at a foundational or structural level instead of just appending increasingly complex forms of "please don't murder everyone or ignore the instructions to not murder everyone" to the prompt, but doing that would imply that they need to radically change their entire course up to this point, and increasingly that doesn't appear likely to happen unless something forces it to.
So many of these people, as with the NFT clowns, have "Twelve Year Old First Day On The Internet" energy.
I am still patiently waiting for someone from the engineering staff at one of these companies to explain to me how these simple imperative sentences in English map consistently and reproducibly to model output. Yes, I understand that's a complex topic. I'll continue to wait.
I don't work at one of those companies, just somewhere mainlining AI, so this answer might not satisfy your requirements. But the answer is very simple. The first thing anyone working in AI will tell you (maybe only internally?) is that the output is probabilistic, not deterministic. By definition, that means it's not entirely consistent or reproducible, just... maybe close enough. I'm sure you already knew that though.
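To make the "probabilistic, not deterministic" point concrete, here's a toy next-token sampler. The vocabulary and logits are invented for illustration; real models do the same softmax-and-sample step at temperature > 0, which is why identical prompts can yield different outputs:

```python
import math
import random

def sample_token(logits, temperature, rng):
    # Softmax over temperature-scaled logits, then draw one index.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.5, 0.5]  # invented scores for a made-up "prompt"

# Two runs with different seeds: the "model" can answer differently.
for seed in (1, 2):
    rng = random.Random(seed)
    print(vocab[sample_token(logits, temperature=1.0, rng=rng)])

# Only greedy decoding (argmax, i.e. temperature -> 0) is reproducible:
print(vocab[max(range(len(logits)), key=logits.__getitem__)])  # always "yes"
```

Even pinning the seed only makes one deployment reproducible with itself; it doesn't make the English instruction map reliably to behavior.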
However, from my perspective, even if it were deterministic, it wouldn't make a substantial difference here.
For example, this file says I can't ask it to build a DoS script. Fine. But if I ask it to write a script that sends a request to a server, and then later I ask it to add a loop... I get a DoS script. It's a trivial hurdle at best, and doesn't even approach basic risk mitigation.
DoS script
Part of me reads that and still thinks, "Oh, you mean like AUTOEXEC.BAT?"
I'm sure these English instructions work because they feel like they work. Look, these LLMs feel really great for coding. If they don't work, that's because you didn't pay $200/month for the pro version and you didn't put enough boldface and all-caps words in the prompt. Also, I really feel like these homeopathic sugar pills cured my cold. I got better after I started taking them!
No joke, I watched a talk once where some people used an LLM to model how certain users would behave in their scenario given their socioeconomic backgrounds. But they had a slight problem, which was that LLMs are nondeterministic and would of course often give different answers when prompted twice. Their solution was to literally use an automated tool that would try a bunch of different prompts until they happened to get one that would give consistent answers (at least on their dataset). I would call this the xkcd green jelly bean effect, but I guess if you call it "finetuning" then suddenly it sounds very proper and serious. (The cherry on top was that they never actually evaluated the output of the LLM, e.g. by seeing how consistent it was with actual user responses. They just had an LLM generate fiction and called it a day.)
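The procedure described above is easy to sketch. Everything here is invented for illustration: fake_llm stands in for a real model call, and the "consistency" score is the only thing the loop optimizes - nothing ever checks whether the answers are right, which is the jelly-bean part:

```python
import random

def fake_llm(prompt, rng):
    # Nondeterministic stand-in for a model call: each prompt wording
    # has its own invented bias toward answer "A".
    stability = (sum(ord(c) for c in prompt) % 11) / 10
    return "A" if rng.random() < stability else "B"

def consistency(prompt, trials=20, seed=0):
    # Fraction of trials agreeing with the majority answer.
    rng = random.Random(seed)
    answers = [fake_llm(prompt, rng) for _ in range(trials)]
    return max(answers.count("A"), answers.count("B")) / trials

# "Finetuning": try wordings until one happens to score consistently.
candidates = [f"prompt variant {i}" for i in range(30)]
best = max(candidates, key=consistency)
print(best, consistency(best))
```

The loop will always find *some* wording that looks stable on the dataset at hand, whether or not the answers bear any relation to how real users behave.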
Claude also has "avoid substrings". Related to that, and to a funny extension-denylist image that went around on the social medias the last few days: .ass is a subtitle format.
Internet Comment Etiquette: "Relationships with AI"
... hadn't thought about Glenn Beck in a decade; that last interview was pretty wtf.
Not sure what the etiquette is for how long they should be dead before you talk to the AI-geist on youtube, but George Washington somehow feels weirder than Kirk did; idk.
Probably because Washington was a nuanced and deep person who, at the lightest, could be reduced to a colony-era Cincinnatus. His ethics were sufficiently developed that we can interrogate his ethical stance even without his physical presence. This isn't to say that Washington was a great person, but more to say that Kirk did not ever achieve that level of ethical development.
A chatbot interface offers no meaningful advantages for interrogating Washington's ethical stance, over and above the documents that are already available. Instead, it offers a pleasant sheen of false certainty. So in that way, it's dragging a guy who's been dead for two centuries into the social media era. Huzzah!
It does have one advantage however. Using it means you should be put to death. If you are any form of hardline Christian.
The classic 40k catch-22: either it doesn't do what you're claiming it does, in which case you're a heretic lying to the inquisition, OR it does, and you're summoning the spirits of the dead like a necromancer heretic.
This article on the brand of journalism that's just parroting what the CEOs say, otherwise known as "CEO said a thing!" journalism
The grand irony is I'm not even sure most people click on or read this sort of stuff. I don't think it's often even created to be read by anyone. I think it's created as a sort of swaddling fan fiction for MBAs, advertisers, event sponsors and sources, so they can tune out ethical quibbles and feel good about how clever they are.
Every time someone hypes up Steve Jobs' "reality distortion field", this is what they're actually talking about, whether they realize it or not.
In my experience "all hands" meetings are very much CEOs and their sycophants cosplaying as podcast hosts for an hour whilst forcing their employees to watch/listen. They are almost never useful and a colossal waste of money, especially in corporations with 10k+ employees. Like, the salary cost for 10k people for 1 hour would probably pay off my mortgage.
Is Trace (Tracing Woodgrains) the only one of our friends who has served in the military? A lot of neurodivergent young people spend some time in the US military and some of our friends were the right age to get in before the War on Abstract Nouns began.
A pretty staid-sounding law firm warns that the AI industry is partying like it's 2007:
Lenders who originated data center loans [...] have begun pooling those loans and selling tranches to asset managers and pension funds, spreading risk well beyond the original lending institutions.
Also of note:
The most basic litigation risk in AI infrastructure finance is that the revenues generated by the sector may prove insufficient to service the fixed obligations incurred to build it. The industry brought in approximately $60 billion in revenue in 2025 against roughly $400 billion in capital expenditure.
(Via.)
Quinn Emanuel is among the biggest of big corporate law firms, with a substantial footprint in Silicon Valley. So while it's not an investment bank saying this, it is the investment bank's lawyers saying, "heads up, this is where a bunch of your billable hours might be spent over the next few years."
Ads in pull requests? Sure why not sigh
https://mail.cyberneticforests.com/the-computer-science-fetish/
The fetishism of the computer scientist therefore refers less to specific expertise than to whatever we imagine a credentialed expert can bestow: an external voice that says, "ask, and you shall receive." The computer scientist becomes a mirror where those who work with the social, practical impacts of the tech hope to see our understanding affirmed. The people who offer that validation - who position themselves against the discourse of critique, who seem unbothered and detached, even ridiculing the same critical lingo that exhausts you - are not doing it out of sober objectivity or insight.
Sometimes they just don't respect you. Sometimes they're just annoyed by calls for accountability. And sometimes, they do it because they've fused with an interacting swarm of chatbots and transcended their human identity.
I've been reading this guy's blog and techpolicy.press articles for about a year and have found them very worthwhile.
I was sufficiently interested based on this that I tracked down a few other pieces of his. This one felt like a good take for an era where these things are being used for more than just slop generation, despite the underlying flaws not being resolved.