

Probably one part normalisation, one part AI supporters throwing tantrums when people don't treat them like the specialiest little geniuses they believe they are. These people have incredibly fragile egos, after all.


Checked back on the smoldering dumpster fire that is Framework today.
Linux Community Ambassadors Tommi and Fraxinas have jumped ship, sneering at the company's fash turn on the way out.


they'll just heat up a metal heat sink per request and then eject that into the sun
I know you're joking, but I ended up quickly skimming Wikipedia to determine the viability of this (assuming the metal heatsinks were copper, since copper's great for handling heat). Far as I can tell:
- The sun isn't hot enough or big enough to fuse anything heavier than hydrogen, so the copper's gonna be doing jack shit when it gets dumped into the core
- Fusing elements heavier than iron loses you energy rather than gaining it, and copper's a heavier element than iron (atomic number of 29, compared to iron's 26), so the copper undergoing fusion is a bad thing
- The conditions necessary for fusing copper into anything else only happen during a supernova (i.e. the star is literally exploding)
So, this idea's fucked from the outset. Does make me wonder if dumping enough metal into a large enough star (e.g. a Dyson sphere collapsing into a supermassive star) could kick off a supernova, but that's a question for another day.
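For what it's worth, the "fusing past iron loses you energy" point can be sketched with a few rough binding-energy-per-nucleon figures (approximate values from the standard binding-energy curve, quoted from memory, so treat them as ballpark):

```python
# Approximate nuclear binding energy per nucleon, in MeV.
# Fusion releases energy only when the product is more tightly
# bound than the fuel; the curve peaks around iron/nickel.
binding_mev_per_nucleon = {
    "H-1": 0.0,      # a lone proton has nothing to bind to
    "He-4": 7.07,
    "C-12": 7.68,
    "Fe-56": 8.79,   # near the peak of the curve
    "Cu-63": 8.75,   # copper sits just past the peak
}

def fusion_pays_off(fuel: str, product: str) -> bool:
    """Energy is released only if binding per nucleon increases."""
    return binding_mev_per_nucleon[product] > binding_mev_per_nucleon[fuel]

print(fusion_pays_off("H-1", "He-4"))    # hydrogen burning: True
print(fusion_pays_off("Fe-56", "Cu-63")) # fusing toward copper: False
```

Which is the whole iron-peak argument in two comparisons: hydrogen burning climbs the curve, while making anything copper-ward from iron slides back down it.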


The question of how to cool shit in space is something that BioWare asked themselves when writing the Mass Effect series, and they came up with some pretty detailed answers that they put in the game's Codex ("Starships: Heat Management" in the Secondary section, if you're looking for it).
That was for a series of sci-fi RPGs which haven't had a new installment since 2017, and yet nobody's bothering to even ask these questions when discussing technological proposals which could very well cost billions of dollars.


It also integrates Stake into your IDE, so you can ruin yourself financially whilst ruining the company's codebase with AI garbage


"you can set the sycophancy engines so they aren't sycophancy engines"
I'll take "Shit that's Impossible" for 500, Alex


I wonder: when the market finally realises that AI is not actually smart and is not bringing any profits, and the bubble subsequently bursts, will it change this perception, and in what direction? I would wager that crashing the US economy will give a big incentive to change it, but will it be enough?
Once the bubble bursts, I expect artificial intelligence as a concept will suffer a swift death, with the many harms and failures of this bubble (hallucinations, plagiarism, the slop-nami, etcetera) coming to be viewed as the ultimate proof that computers are incapable of humanlike intelligence (let alone Superintelligence™). There will likely be a contingent of true believers even after the bubble's burst, but the vast majority of people will respond to the question of "Can machines think?" with a resounding "no".
AI's usefulness to fascists (for propaganda, accountability sinks, misinformation, etcetera) and the actions of CEOs and AI supporters involved in the bubble (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) will also pound a good few nails into AI's coffin, by giving the public plenty of reason to treat any use of AI as a major red flag.


Checked back in on the ongoing Framework dumpster fire - Project Bluefin's quietly cut ties, and the DHH connection is the reason why.


This entire news story sounds like the plotline for a rejected Captain Planet episode. What the fuck.


A judge has given George RR Martin the green light to sue OpenAI for copyright infringement.
We are now one step closer to the courts declaring open season on the slop-bots. Unsurprisingly, there's jubilation on Bluesky.


Decided to check the Grokipedia "article" on the Muskrat out of morbid curiosity.
I haven't seen anything this fawning since that one YouTube video which called him, and I quote its title directly, "The guy who is saving the world".


I have a nasty feeling there's a lot of ordinary people who are desperate to throw their money away on OpenAI stock. It's the AI company! The flagship of the AI bubble! AI's here to stay, you know! OpenAI? Sure bet!
Remember when a bunch of people poured their life savings into GameStop and started a financial doomsday cult once they lost everything? That will happen again if OpenAI goes public. (I recommend checking out This Is Financial Advice if you want a deep-dive into the GameStop apes, it is a trip)
One really bad consequence this deal just opened the gates to is making it much easier for corporations to gut charities. A proper charity can run very much like a business, but it gets a lot of free rides - and it can grow into quite the juicy plum. The California and Delaware decisions on OpenAI are precedents for large investors to come in and drain a charity if they say the right forms of words. I predict that will become a problem.
…why do I get the feeling companies are gonna start immediately gutting charities once the bubble pops


The follow-up's worth mentioning too:
It's interesting they're citing specifically DHH and Ladybird as examples to follow, considering:
https://drewdevault.com/2025/09/24/2025-09-24-Cloudflare-and-fascists.html


Performing the SPARTAN Program's original aim, sir.


Baldur Bjarnason's (indirectly) given his thoughts on the piece, treating its existence (and the subsequent fallout) as a cautionary tale on why journalistic practices exist and how conflicts of interest can come back to haunt you.
(In particular, Baldur notes that Zitron could've nipped this problem in the bud by firing his AI-related clients after he became the premier AI critic.)


OpenAI's data stealing scheme disguised as a browser can be prompt injected. In other news, water is wet.
EDIT: How did I not notice I was referring to OpenAI as ChatGPT? (anyways, fixed it now)


Watched Once Upon A Time in Space recently - pretty damn good documentary series.


Trump Administration Providing Weapons Grade Plutonium to Sam Altman
The "Weapons Grade" part is almost certainly editorializing (hopefully), but this whole shit sounds like another Chernobyl waiting to happen


so is the moral decline a side effect, or technocapitalism working as designed?
AI is an accountability sink by design; it's technocapitalism working as designed
If even a single case pops up, I'd be surprised - AFAIK, cybercriminals are exclusively using AI as a social engineering tool (e.g. voice cloning scams, AI-extruded phishing emails, etcetera). Humans are the weakest part of any cybersec system, after all.
Given AI's track record on security, that sounds like an easy way to become an enticing target.