Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Pete Steinberger shares his OpenAI bill on Twitter. The headline number is $1.3 million in the last 30 days.
But in his (own) defense, it takes so many tokens to do so many bad ideas at once.
How many people, if they were given $1.3 million just once in their lifetime, would figure out far better uses for that money than this guy?
Coincidentally, it came up in conversation last night that the head of AI at Northeastern University makes $1.3 million a year (I don't know where that number came from, but it's what I heard, and it's apparently the second-highest salary at the university, exceeded only by the president's).
you give me 1.3 million dollars and I'll fuck off on a motorcycle for the rest of my natural life and that would still be a better value for the money than whatever the fuck this is.
Microsoft releases a cost calculator for GitHub Copilot's new token-usage-based billing. Previously you were charged per request, kind of like hiring a cab and paying the same whether you went to the next corner or the next continent.
Turns out Zitron may have been seriously lowballing the actual cost-to-subsidized-billing ratio.
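For a rough sense of why the billing switch matters, here's a back-of-the-envelope sketch. Every price in it is a made-up placeholder (not Copilot's or OpenAI's actual rates); the point is only that a flat per-request fee and per-token metering diverge wildly once an agent starts stuffing the context window.

```python
# Back-of-the-envelope sketch of why per-token billing can blow up compared
# to a flat per-request rate. All prices here are made-up placeholders,
# not GitHub Copilot's or OpenAI's actual rates.

FLAT_PRICE_PER_REQUEST = 0.04       # hypothetical flat rate per request
PRICE_PER_1K_INPUT_TOKENS = 0.01    # hypothetical usage-based rates
PRICE_PER_1K_OUTPUT_TOKENS = 0.03

def token_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single request under usage-based billing."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS

# "Next corner": a small completion. "Next continent": an agent loop that
# fills the context window and generates a pile of output.
print(f"flat rate, any request:          ${FLAT_PRICE_PER_REQUEST:.2f}")
print(f"small request (2k in, 1k out):   ${token_cost(2_000, 1_000):.2f}")
print(f"agent loop (200k in, 20k out):   ${token_cost(200_000, 20_000):.2f}")
```

Under the flat rate both trips cost the same; under metering, with these placeholder numbers, the second one costs about fifty times as much, which is roughly the shape of the bills people are now posting.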
zulip added slop to their codebase a long time ago (1, 2) but now they've released this bullshit blog post with some choice nonsense:
I seriously considered banning LLM use for Zulip contributions. But our view is that contributors should be allowed to use modern tools in the service of producing great, reviewable work. AI-assisted work is of course subject to the same rigorous review processes we've always used for community contributions.
So we decided to invest in creating, refining, and enforcing a new AI use policy, which has the following key tenets:
- End-to-end human responsibility for work and the communication around it. You always need to understand, test, and explain the changes you're proposing to make, whether or not you used an LLM as part of your process to produce them.
- Clear and concise communication about points that actually require discussion. While we allow carefully edited AI-generated PR descriptions, we've had to ban AI-generated chat messages in the development community as too disruptive. Manual enforcement of this policy has been rough, with far more PRs closed without review, stern warnings, and outright bans of repeat offenders than we've ever had to apply before. (What do you do when someone apologizes for submitting AI slop… by copy-pasting an apology from ChatGPT, including surrounding quotation marks?) We expect that next fall, automation or other major changes will be required for the PR triage process to be manageable.
The results [of using Claude] were promising (and far better than just a few months prior) - enough for us to start investing in teaching Claude Code how to self-review its work, and how to produce PRs that are easy for maintainers to review. This has largely been an AI-supported process of digesting our contributor documentation into CLAUDE.md, and iterating when we see the model struggle.
i liked zulip
I'm not going to start a punch-up with a dev team or maintainer who believes that AI tools can help good programmers do good work or whatever, but time and again we see that, just like crypto before it, you aren't inviting good programmers to work with you. You're inviting the bros. AI bros and crypto bros are a specific type of Guy. I'm sure there were dotcom bros in the 90s. This is not a new problem, even if the current economic circumstances make being this type of Guy more viable than ever, apparently.
It's not just that the tech is bad (though it is bad), it's that it's uniquely privileged by culture and economics to empower the worst assortment of morons and grifters outside of Wall Street (and also inside of Wall Street, because of fucking course it does).
Upvoted but disliked
Here's a nice example of LW brain (albeit heavily downvoted, so might be hard to get to):
https://www.lesswrong.com/posts/YiRsCfkJ2ERGpRpen/leogao-s-shortform?commentId=EJs4reRGEni73dxfC
Essentially, certain hereditary diseases are very rare, leading to less resources to find a cure, so the Big Brain Rationalist solution is to breed more people with the disease so it gets profitable to cure.
New(ish) Baldur Bjarnason - a fairly politically charged one at that, going into the US hegemony powering the current tech industry (and the AI bubble by extension), and how the Hormuz crisis is all but guaranteed to topple the whole thing.
I particularly appreciate the argument he makes about the tech industry pivoting from creating value to exercising control. I disagree that this trend is specific to the tech industry, but with the possible exception of Monsanto they have been the most successful at it.
With the obvious failings of the American state to perform its basic duties and the cross-pollination of the American political and corporate elites, it seems plausible that at least some factions in the tech industry are awaiting an opportunity to take advantage of this weakness they've created and exercise that control over the functions of the state directly. I feel like I should be saying this into a webcam from behind a cartoonishly large desk in between shilling for nutritional supplements, but I'd be lying if I said I didn't fear what rough beast, its hour come at last, slouches towards Bethlehem to be born.
There's a… robust debate about LLM slop submissions on everyone's favorite boiled crustacean site.
First shot fired: a promptfondler suggests suppressing all comments pointing out that a submission reeks of slop by flagging them as "off-topic" [1]
"This is written by an LLM" comments should be flagged as off-topic (80 net upvotes, 139 comments)
Riposte: a suggestion that posting LLM-generated content should be a bannable offence:
LLM generated submissions should be disallowed (274 net upvotes, 108 comments)
So far it looks as if the anti-slop forces have opinion on their side.
[1] short explanation of how flagging of comments works on lobste.rs - it's sort of a downvote, but the flagger has to choose from a list of reasons. If a commenter accrues enough flags they'll get a red warning banner, and might possibly be banned as disruptive.
OK here's a followup, which I'm putting out here as there's probably a higher proportion of neurodivergent people here than in other fora I frequent
A commenter on lobste.rs states that being anti-LLM is effectively being against neurodivergent individuals, because many such individuals express themselves in prose in a way that's indistinguishable from LLM output.
Is this a widespread viewpoint?
https://lobste.rs/s/wee21u/this_is_written_by_llm_comments_should_be#c_nadrad
I was trying to reply by way of linking a piece by Robert Kingett that had been shared here some time ago that, in excruciating detail and with righteous fury distilled to cold analysis, explained why AI is absolute shit for accessibility aids. His experience is in the realm of physical disability rather than neurodivergence, but that only makes the problems more starkly illustrated rather than unique.
Unfortunately I couldn't find that piece, but I found this one and needed to explain to the kid why I randomly laughed out loud.
I recall seeing someone elsewhere on the fedi trying to drum up a point like that a few weeks ago; their complaint was something like "I've been chased out of neurodivergent spaces for not being enough into LLMs"
No idea if their claim was true; I can definitely see the possibility of some ND neurotypes slanting more favourable, but nfi on the values
Not sure I buy the ground for that argument anyway tho. Lotta people used to smoke and society slapped all manner of regulation on that
I called it out as lies and bullshit, the poster asserted it was totally true and I asked for numbers to support this statistical claim.
And instead of providing numbers, they came back with an anecdote about university administrators being incompetent (which is deeply unsurprising and thus, in the Shannon sense, conveys no information).
this is obvious bullshit: theoretically, my writing is affected by two factors that might skew the assessment towards it having been generated by an llm: i'm neurodivergent (adhd) and english is not my native language - and i was never accused of using synthetic text generators…
here's another commenter saying being against LLMs is being against the otherly abled:
(commenter is a notorious promptfondler)
AI is bad at everything, part infinity: AI transcription whitewashes 18th-century documents
In other Scott of Siskind news, he just posted an entirely unnecessary amount of words to aggressively push back against the adage that "all exponentials sooner or later turn into sigmoids" as if it was by itself a load bearing claim of the side arguing against the direct imminence of the machine god.
It's just a bunch of arguing by analogy ("helping you build intuition") and you-can't-really-knows while implying AI 2027 was very science much rigorous, but it also feels kind of desperate, like why are you bothering with this overperformative setting-the-record-straight thing, have you been feeling inadequate as an AI-curious stats fondler of note lately?
he just posted an entirely unnecessary amount of words
taking a quick look at it… it's actually short by Scott's standards, but still overly long, given that the only point he makes is claiming Lindy's Law is applicable to predicting AI progress in the absence of other information. Edit: glancing at it again… it's not that short, I kinda skimmed until I got to Scott's actual point my first time glancing at it. You can't blame me for not reading it.
you-can't-really-knows
Yeah, he straw-mans AI critics/skeptics as trying to make an argument from ignorance, then tries to argue against that strawman using Lindy's Law (which assumes ignorance and a Pareto distribution). He completely ignores that AI critics are actually making detailed arguments about LLM companies consuming all the good and novel training data, hitting the limits on what compute costs they can afford, running into problems of the long lead time for building datacenters, etc. Which is pretty ironic given his AI 2027 makes a nominal claim to accounting for all that stuff (in actuality it basically all rests on METR's task horizons, and distorts even that already questionable dataset).
Building infinite compute is hard, man
As if LLMs being the last step before AGI/ASI/The Metal Messiah is a foregone conclusion. As far as I can tell even the AI 2027 thing only argues that once the bots completely nail down programming (any minute now) then the foom happens and the models will magic themselves into true AI, because apparently being good at solving coding problems is a sufficient proxy for superintelligence, hence the METR infatuation.
I mean, to be fair thatās not unique to them - software engineers have been worse than physicists in assuming that all of reality and human experience is downstream from their chosen field.
The idea of "the exponential curve goes up forever" has always been silly and an idea rooted in capitalism for me ("no bro you don't get it we're gonna get infinite money forever"). Limited resources exist, and people are already very fed up with the ludicrous amounts of water and electricity data centres take up. Making bigger models that need to run for longer is also probably going to take an exponential amount of resources (and also make people hate you more).
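For what it's worth, the adage Scott is mad about has simple math behind it: a logistic (sigmoid) curve is numerically indistinguishable from an exponential while it's still far below its ceiling, so the early part of the curve tells you nothing about where, or whether, it flattens. Here's a minimal sketch; the ceiling and growth rate are arbitrary values chosen only to show the shape, not anything measured.

```python
import math

# Minimal sketch: an exponential vs. a logistic (sigmoid) that share the same
# early growth rate. The ceiling K and rate r are arbitrary; the point is just
# that the two curves agree until the ceiling starts to bite.

K = 1_000.0   # arbitrary carrying capacity for the logistic
r = 0.5       # shared growth rate
x0 = 1.0      # shared starting value

def exponential(t: float) -> float:
    return x0 * math.exp(r * t)

def logistic(t: float) -> float:
    # Standard logistic growth with initial value x0 and capacity K.
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 25, 4):
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exponential={e:11.1f}  logistic={s:7.1f}  ratio={s / e:.3f}")
```

The two track each other closely at first and then diverge by orders of magnitude once the logistic approaches its ceiling, which is exactly why "it's been exponential so far" settles nothing.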
Someone called Fran has a story of being sexually harassed at the Center for Effective Altruism (and assaulted in other communities).
Fran has done some really great writing on this, really admire her ability to deconstruct a community she's fond of.
(for the record this is downvoted by the community, and the one helpful comment is slammed by OP)
im smarter than everyone else around me, especially those whiny feminists. why hasn't society granted me a female to be my mate yet?
A lesswronger will literally do… whatever this is instead of going to therapy.
the reply is about as close to being nice and helpful as one could be, really
He probably paid a rationalist dating coach good money to tell him to do that.
least egotistical lesswronger
you know how sometimes people that werenāt exposed to religion as children sometimes convert and get really weird about it as adults (eg: the extremely online california tradcaths) and because they were never socialized in a religion they speedrun committing every medieval heresy? rationalism is that but for philosophy.
https://feed.hella.cheap/@bob/statuses/01KRM0NVXCFT80AVFBRSB1G6G4
Apparently, the American Physical Society is revising their AI policy to allow "broader applications" than the "light editing" they currently permit.
I currently have a review request sitting in my inbox from them. I'm thinking of using this as a reason to decline that request.
I would rather quit physics than accept the institutional endorsement of skill-destroying, environmentally disastrous fashtech.
looking very much forward to that crashing head first into arXiv threatening a ban if your chatbot fucks up in your name
I was pretty happy about seeing that news about arXiv! So much news has been various organizations giving into LLM usage like some kind of inevitability, so it was a nice change of pace.
It is this continuing slippage of standards that makes me appreciate a hard line against any and all genAI that places like awful.systems have. You concede one small usage and the boosters will keep pushing for more.
Yeah, the first AI comes in all nice and friendly, but if you don't toss them out, before you know it you turn out to be an AI bar.
(Also noticed that a lot of "I just want some nuanced talks" friendly-looking AI bros are not friendly at all when they keep getting pushback).
But I listened and agreed that you had serious concerns about certain aspects of this technology. I even agreed when you talked about how frustrating it was that specifically other people wanted to do bad things. I listened as you asked whether I had any options to address those concerns! What more do you want from me before you agree to let me do and say whatever I want!
Prompt goblins insist that weāre backward and irrelevant. Why do they crave our sweet delicious approval?
The plagiarism, massive expenditure of venture capital, and unreliable slop output are all intrinsic to the technology, and they hate to be reminded of that because there isn't much they can do about it. From a technological standpoint, even locally run community fine-tuned open-weight models still originated from plagiarism and big corporate investments, and still output slop. From a social standpoint, the most they can do is try to claim legitimacy through consensus building, and we are a threat to that.
it's not approval they're after, it's reaffirmation of faith
they want your data and freshwater
freshwater
This reminded me of a few old comic stories where eventually the robot/computer was partially running on blood.
(One of them was a Judge Dredd one where they had vampire robots who iirc used the blood to keep a president in suspended animation alive. Snap, Crackle and Pop, it had a surprisingly wholesome ending for a Dredd comic).
AI is Hungry for Power and You Are Footing the Bill - Naked Capitalism
Money spent on grid upgrades and tax breaks tied to them means fewer resources for things people actually need, like schools, public transit, local infrastructure, or basic community services that make life more affordable and stable.
Even if you've never touched an AI model in your life, you're going to pony up for it.
i want to speak to the manager of storytelling
(found at https://blacksky.community/profile/did:plc:x2muxxe5t25hckf22sk25ocf/post/3mlobs4uq422l)

No one is stopping anyone from editing out Jar Jar; if they care that much, just do it. Put up or shut up. /s
George Lucas has entered the chat.
This may be code for "I don't want to see uppity women, brown people, and queer people in my shows."
One of the motivations for fanfiction is that people want more āfillerā. They like the characters and (often) the world those characters inhabit, and so they write a story that lets them (and other fans) spend more time with the fiction.
The whole slice-of-life subgenre is all about this. No real conflict or plot, just scenes of the characters existing in their world. My wife both reads and writes that kind of thing and let me tell you the level of research and worldbuilding that goes into writing a simple meal scene or whatever.
So in high school, I was one of those annoying kids that went "why do we have to learn how to analyze poems? We're never gonna need this in real life" in English (well… German, but doesn't matter) class.
I'm deeply grateful to my teachers back then for patiently getting me to do these things anyway, because there came a point in my life years later where I suddenly understood that those "useless" lessons and hours "wasted" analyzing Goethe and Borchert and Fitzgerald handed me the tools to understand media (and not just literature!) instead of just consuming it.
I hope it's clear how that relates to the screenshot. More than that though, I sometimes feel like the slew of shit media over the past decade is at least in part due to writers/studios/… now assuming people do in fact merely consume. But that's a rant that's completely off-topic here, so I'll shut up now.