Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
There's a Charles Stross novel where cultists take over the US government and begin a project to build enough computational capacity to summon horrors from beyond space-time (in space). It's called The Labyrinth Index and it's very good!
So anyway, this happened:
https://www.wsj.com/tech/ai/openai-isnt-yet-working-toward-an-ipo-cfo-says-58037472
Also, this:
What's a government backstop, and does it happen often? It sounds like they're asking for a preemptive bail-out.
I checked the rest of Zitron's feed before posting and it's weirder in context:
Interview:
She also hinted at a role for the US government "to backstop the guarantee that allows the financing to happen", but did not elaborate on how this would work.
Later at the jobsite:
I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word "backstop" and it muddled the point.
She then proceeds to explain she just meant that the government "should play its part".
Zitron says she might have been testing the waters, or it's just the cherry on top of an interview where she said plenty of bizarre shit.
What's a government backstop, and does it happen often? It sounds like they're asking for a preemptive bail-out.
Zitron's stated multiple times a bailout isn't coming, but I'm not ruling it out myself - AI has proven highly useful as a propaganda tool and an accountability sink, and the oligarchs in office have good reason to keep it alive.
exuberance
Truly a right-wing tech: after getting all the attention, money, and data, they're now mad people don't love it enough.
"I don't think there's enough exuberance about AI"
Tinkerbell needs you all to wish harder, boys and girls
@Architeuthis @o7___o7
"I don't think there's enough exuberance about AI"? Wow.
So, today in AI hype, we are going back to chess engines!

Ethan pumping AI-2027 author Daniel K here, so you know this has been "ThOrOuGHly ReSeARcHeD"™
Taking it at face value, I thought this was quite shocking! Beating a super GM with queen odds seems impossible for the best engines that I know of!! But the first * here is that the chart presented is not in classical format. Still, QRR odds beating 1600 players seems very strange, even if weird time-odds shenanigans are happening. So I tried this myself and to my surprise, I went 3-0 against Lc0 in different odds QRR, QR, QN, which now means, according to this absolutely laughable chart, that I am comparable to a 2200+ player!
(Spoiler: I am very much NOT a 2200 player… or a 2000 player… or a 1600 player)
And to my complete lack of surprise, this chart crime originated in a LW post, with the creator commenting here with "pls do not share this without context, I think the data might be flawed" due to small sample size for higher elos, and also the fact that people are probably playing until they get their first win and then stopping.
Luckily absolute garbage methodologies will not stop Daniel K from sharing the latest in Chess engine news.
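For a sense of how little those samples prove: by the standard Elo logistic formula (this is generic Elo arithmetic, not the post's actual data), even a dead-even opponent sweeps a 3-game sample an eighth of the time, so a handful of play-until-you-win games pins down almost nothing:

```python
# Standard Elo expected-score formula; generic Elo math, not the LW post's data.
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score (treated here as win probability, ignoring draws) for A vs B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# How often a player sweeps a 3-game sample, by true rating gap:
for gap in (0, 200, 400):
    p_sweep = expected_score(1600 + gap, 1600) ** 3
    print(f"gap {gap:>3}: P(3-0 sweep) = {p_sweep:.3f}")
```

An evenly matched player goes 3-0 a full 12.5% of the time, so "kept playing until my first win, then stopped" inflates implied ratings in exactly the way the post's creator worried about.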

But wait, why are LWers obsessed with the latest chess engine results? Ofc it's because they want to make some point about AI escaping human control even if humans start with a material advantage. We are going back to Legacy Yud posting with this one, my friends. Applying RL to chess is a straight shot to applying RL to skynet to checkmate humanity. You have been warned!
LW link below if anyone wants to stare into the abyss.
https://www.lesswrong.com/posts/eQvNBwaxyqQ5GAdyx/some-data-from-leelapieceodds
One of the core beliefs of rationalism is that Intelligence™ is the sole determinant of outcomes, overriding resource imbalances, structural factors, or even just plain old luck. For example, since Elon Musk is so rich, that must be because he is very Intelligent™, despite all of the demonstrably idiotic things he has said over the years. So, even in an artificial scenario like chess, they cannot accept the fact that no amount of Intelligence™ can make up for a large material imbalance between the players.
There was a sneer two years ago about this exact question. I can't blame the rationalists though. The concept of using external sources outside of their bubble is quite unfamiliar to them.
two years ago
🪦👨🏼➡️👴🏼
since Elon Musk is so rich, that must be because he is very Intelligent™
Will never be able to understand why these mfs don't see this as the unga bunga stupid-ass caveman belief that it is.
cos it implies that my overvalued salary as an IT monkey for parasite companies of no social value is not because I sold my soul to capital owners, it's because I've always been a special little boy who got gold stars in school
@swlabr @lagrangeinterpolator Rat calvinism lol
I was wondering why Eliezer picked chess of all things in his latest "parable". Even among the lesswrong community, chess playing as a useful analogy for general intelligence has been picked apart. But seeing that this is recent half-assed lesswrong research, that would explain the renewed interest in it.
in terms of zitron fallout, there used to be a comment section at his blog, it's not there anymore
Huh, what happened? Would you mind linking some more details?
previous stubsack https://awful.systems/comment/9235549 i don't think too hard about it, because to a degree all pr people are professional liars in the first place, but bluesky didn't like it
fyi over the last couple of days firefox added perplexity as a search engine, must have been in an update
A redditor posted the latest Pivot to AI propaganda to r/betteroffline, where it currently has around 560 votes. This upset and confused a great many prompt enthusiasts in the comments, which goes to show that a kicked dog yelps.
Pls don't kick dogs 🐕
ITT: new synonym for promptfondler: "brain cuck"
Eh, cuck is kind of the right-winger's word, it's tied to their inceldom and their mix of moral panic and fetishization of minorities' sexualities.
Sure. Not advocating for its usage. Just got a kick out of seeing it.
Unstoppable IP enforcers meet unmovable slop generator?
Plus, the authors currently suing OpenAI have gotten their hands on emails and internal Slack messages discussing their deletion of the LibGen dataset - a development which opens the company up to much higher damages and sanctions from the court for destroying evidence.
Still think it is wild they used the libgen dataset(s) and have basically gotten away with it apart from some minor damages only for US publishers (who actually registered their copyright). Even more so as my provider blocks libgen etc.
Google is space data-center curious, too:
https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/
Part of me wants to see Google actually try this and get publicly humiliated by their nonexistent understanding of physics, part of me dreads the fact it'll dump even more fucking junk into space.
how do you want to get butlerian jihad without kessler syndrome?
Considering we've already got a burgeoning Luddite movement that's been kicked into high gear by the AI bubble, I'd personally like to see an outgrowth of that movement be what ultimately kicks it off.
There were already some signs of this back in August, when anti-AI protesters vandalised cars and left "Butlerian Jihad" leaflets outside a pro-AI business meetup in Portland.
Alternatively, I can see the Jihad kicking off as part of an environmentalist movement - to directly quote Baldur Bjarnason:
[AI has] turned the tech industry from a potential political ally to environmentalism to an outright adversary. Water consumption of individual queries is irrelevant because now companies like Google and Microsoft are explicitly lined up against the fight against climate disaster. For that alone the tech should be burned to the ground.
I wouldn't rule out an artist-led movement being how the Jihad starts, either - between the AI industry "directly promising to destroy their industry, their work, and their communities" (to quote Baldur again), and the open and unrelenting contempt AI boosters have shown for art and artists, artists in general have plenty of reason to see AI as an existential threat to their craft and/or a show of hatred for who they are.
i think you need to be a little bit more specific unless sounding a little like an unhinged cleric from memritv is what you're going for

but yeah nah i don't think it's gonna last this way, people want to go back to just doing their jobs like it used to be, and i think it may be that bubble burst wipes out companies that subsidized and provided cheap genai, so that promptfondlers hammering image generators won't be as much of a problem. propaganda use and scams will remain i guess
i think you need to be a little bit more specific unless sounding a little like an unhinged cleric from memritv is what you're going for
I'll admit to taking your previous comment too literally here - I tend to assume people are completely serious unless I can clearly tell otherwise.
but yeah nah i don't think it's gonna last this way, people want to go back to just doing their jobs like it used to be, and i think it may be that bubble burst wipes out companies that subsidized and provided cheap genai, so that promptfondlers hammering image generators won't be as much of a problem. propaganda use and scams will remain i guess
Scams and propaganda will absolutely remain a problem going forward - LLMs are tailor-made to flood the zone with shit (good news for propagandists), and AI tools will provide scammers with plenty of useful tools for deception.
The "daylight as a space-based service" bullshit is even worse.
wild article about content scraping nonprofit common crawl
tl;dr they've been faking deleting data upon request (in ways that I find very funny) and their head is noxious even for a tech bro
also is it just me or does SV have a particular gift for perverting the nonprofit concept
@sc_griffith @BlueMonday1984 It enrages me that early on in the article, the founder states that "fair use", a US construct of US copyright law only, means they can apply it to the world's data. The USA signed up to the Berne convention. It's imperfect, but dammit, the signatories are meant to uphold the copyrights of every country who signed up. Not ignore it and decide US copyright is the only law.
Aaand breathe.
He said that Common Crawl is "making an earnest effort" to remove content but that the file format in which Common Crawl stores its archives is meant "to be immutable. You can't delete anything from it."
makes me wonder if it's some crypto hangover
In 2023, he sent a letter urging the U.S. Copyright Office not "to hinder the development of intelligent machines" and included two illustrations of robots reading books.
cheerleaders for creepiest weirdos in sv try to deflect criticism by becoming impossible to parody
sv has had for some time a peculiar understanding of this, and also of some other terms, like "consent", "ownership", "privacy", "safety"...
wasn't common crawl the one that pulled a similar trick to goog's "if you label a thing as $x we won't include you"[0]? I could swear I heard their name in association with some derpshit intake management stuff above and beyond the typical fundamental "free/open scraper set" problems
[0] - a tactic google first pulled with Streetview cars pulling in a pile of wifi beacons and tying it to location - "if you don't want it just rename your AP to '{prefix} - {apname}'". a reply that was just dumb and aggravating but also it fucking sucks that basically no standards have taken this problem to heart in the ~15y hence
Found a high quality sneer of OpenAI from Los Angeles Review of Books: Literature Is Not a Vibe: On ChatGPT and the Humanities
Big Yud posts another "banger"[1], and for once the target audience isn't impressed:
https://www.lesswrong.com/posts/3q8uu2k6AfaLAupvL/the-tale-of-the-top-tier-intellect#comments
I skimmed it. It's terrible. It's a long-winded parable about some middling chess player who's convinced he's actually good, and a Socratic strawman in the form of a young woman who needles him.
Contains such Austenian gems as this:
If you had measured the speed at which the resulting gossip had propagated across Skewers, Washington - measured it very carefully, and with sufficiently fine instrumentation - it might have been found to travel faster than the speed of light in vacuum.
In the end, both strawmen are killed by AI-controlled mosquito drones, leaving everyone else feeling relieved.
Commenters seem miffed that Yud isn't cleaning up his act and writing more coherently so as to warn the world of Big Bad AI, but apparently he just can't help himself.
[1] if by banger you mean a long, tedious turd. 42 minute read!
Some juicy extracts:
Soon enough then the appointed day came to pass, that Mr. Assi began playing some of the town's players, defeating them all without exception. Mr. Assi did sometimes let some of the youngest children take a piece or two, of his, and get very excited about that, but he did not go so far as to let them win. It wasn't even so much that Mr. Assi had his pride, although he did, but that he also had his honesty; Mr. Assi would have felt bad about deceiving anyone in that way, even a child, almost as if children were people.
Yud: "Woe is me, a child who was lied to!"
Tessa sighed performatively. "It really is a classic midwit trap, Mr. Humman, to be smart enough to spout out words about possible complications, until you've counterargued any truth you don't want to hear. But not smart enough to know how to think through those complications, and see how the unpleasant truth is true anyways, after all the realistic details are taken into account." […] "Why, of course it's the same," said Mr. Humman. "You'd know that for yourself, if you were a top-tier chess-player. The thing you're not realizing, young lady, is that no matter how many fancy words you use, they won't be as complicated as real reality, which is infinitely complicated. And therefore, all these things you are saying, which are less than infinitely complicated, must be wrong."
Your flaw, dear Yud, isn't that your thoughts cannot out-compete the complexity of reality, it's that they are a new complexity untethered from the original. Retorts to your wild sci-fi speculations get dismissed as minor complications brought by midwits, yet you very often get the science critically wrong and still expect to be taken seriously! (One might say you share a lot with Humman, misquoting and misapplying "econ 101".)
"Look, Mr. Humman. You may not be the best chess-player in the world, but you are above average. [… Blah blah IQ blah blah …] You ought to be smart enough to understand this idea."
Funnily enough, the very best chess players like Nakamura or Carlsen will readily call themselves dumbasses outside of chess.
"Well, by coincidence, that is sort of the topic of the book I'm reading now," said Tessa. "It's about Artificial Intelligence - artificial super-intelligence, rather. The authors say that if anyone on Earth builds anything like that, everyone everywhere will die. All at the same time, they obviously mean. And that book is a few years old, now! I'm a little worried about all the things the news is saying, about AI and AI companies, and I think everyone else should be a little worried too."
Of course this is a meandering plug for his book!
"The authors don't mean it as a joke, and I don't think everyone dying is actually funny," said the woman, allowing just enough emotion into her voice to make it clear that the early death of her and her family and everyone she knew was not a socially acceptable thing to find funny. "Why is it obviously wrong?"
They aren't laughing at everyone dying, they're laughing at you. I would be more charitable with you if the religion you cultivate were not so dangerous; most of your anguish is self-inflicted.
"So there's no sense in which you're smarter than a squirrel?" she said. "Because by default, any vaguely plausible sequence of words that sounds it can prove that machine superintelligence can't possibly be smarter than a human, will prove too much, and will also argue that a human can't be smarter than a squirrel."
Importantly, you often portray ASI as being able to manipulate humans into doing any number of random shit, and you have an unhealthy association of intelligence with manipulation. I'm quite certain I couldn't get a squirrel to do anything I wanted.
"You're not worried about how an ASI […] beyond what humans have in the way of vision and hearing and spatial visualization of 3D rotating shapes."
Is that… an incel shape-rotator reference?
Yud: "Woe is me, a child who was lied to!"
He really can't let that one go, it keeps coming up. It was at least vaguely relevant to a Harry Potter self-insert, but his frustrated-gifted-child vibes keep leaking into other weird places. (Like Project Lawful, among its many digressions, had an aside about how dath ilan raises its children to avoid this. It almost made me sympathetic towards the child-abusing devil worshipers who had to put up with these asides to get to the main character's chemistry and math lectures.)
Of course this is a meandering plug for his book!
Yup, now that he has a book out he's going to keep referencing back to it, and it's being added to the canon that must be read before anyone is allowed to dare disagree with him. (At least the sequences were free and all online)
Is that… an incel shape-rotator reference?
I think shape-rotator has generally permeated the rationalist lingo for a certain kind of math aptitude; I wasn't aware the term had ties to the incel community. (But it wouldn't surprise me that much.)
If you had measured the speed at which the resulting gossip had propagated across Skewers, Washington - measured it very carefully, and with sufficiently fine instrumentation - it might have been found to travel faster than the speed of light in vacuum.
How do you write like this? How do you pick a normal joking observation and then add more words to make it worse?
How do you write like this?
The first step is not to have an editor. The second step is to marinate for nearly two decades in a cult growth medium that venerates you for not having an editor.
I couldn't even make it through this one, he just kept repeating himself with the most absurd parody strawman he could manage.
This isn't the only obnoxiously heavy-handed "parable" he's written recently: https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius
Even the lesswrongers are kind of questioning the point:
I enjoyed this, but don't think there are many people left who can be convinced by Ayn Rand-length explanatory dialogues in a science-fiction guise who aren't already on board with the argument.
A dialogue that references Stanislaw Lem's Cyberiad, no less. But honestly Lem was a lot more terse and concise in making his points. I agree this is probably not very relevant to any discourse at this point (especially here on LW, where everyone would be familiar with the arguments anyway).
Reading this felt like watching someone kick a dead horse for 30 straight minutes, except at the 21st minute the guy forgets for a second that he needs to kick the horse, turns to the camera and makes a couple really good jokes. (The bit where they try and fail to change the topic reminded me of the "who reads this stuff" bit in HPMOR, one of the finest bits you ever wrote in my opinion.) Then the guy remembers himself, resumes kicking the horse and it continues in that manner until the end.
Who does he think he's convincing? Numerous skeptical lesswrong posts have described why general intelligence is not like chess-playing, and world-conquering/optimizing is not like a chess game. Even among his core audience this parable isn't convincing. But instead he's stuck on repeating poor analogies (and getting details wrong about the thing he is using for analogies - he messed up some details about chess playing!).
First comment: "the world is bottlenecked by people who just don't get the simple and obvious fact that we should sort everyone by IQ and decide their future with it"
No, the world is bottlenecked by idiots who treat everything as an optimization problem.
@sinedpick @awful.systems @gerikson @awful.systems
The world is hamstrung by people who only believe there is one kind of intelligence, it can be measured linearly, and it is the sole determinant of human value.
The Venn diagram of these people and closet eugenicists looks like a circle if you squint at it.
42 minute read
Maybe if you're a scrub. 19 minutes baby!!! And that included the minute or so that I thought about copypasting it into a text editor so I could highlight portions to sneer at. Best part of this story is that it is chess-themed and takes place in "Skewers", Washington, vs. "Forks", Washington, as made famous by Twilight.
Anyway, what a pile of shit. I choose not to read Yud's stuff most of the time, but I felt that I might do this one. What do you get if you mix smashboards, goofus and gallant strips, that copypasta about needing a high IQ to like rick and morty, and the worst aspects of woody allen? This!
My summary:
Part 1. A chess player, "Mr. Humman", plays a match against "Mr. Assi" and loses. He has a conversation with a romantic interest, "Socratessa", or Tessa for short, about whether or not you can say if someone is better than another at chess. Often-cited examples of other players are "Mr. Chimzee" and "Mr. Neumann".
Both "Humman" and "Socratessa" are strawmen. "Socratessa" is described thus:
One of the less polite young ladies of the town, whom some might have called a troll,
Humman, of course, talks down to her, like so:
"Oh, my dear young lady," Mr. Humman said, quite kindly as was his habit when talking to pretty women potentially inside his self-assessed strike zone
I hate to give credit to Yud here for anything, so here's what I'll say: This characterisation of Humman is so douchey that it's completely transparent that Yud doesn't want you to like this guy. Yud's methodology was to have Humman make strawman-level arguments and portray him as kind of a creep. However, I think what actually happened is that Yud has accidentally replicated arguments/johns you might hear from a smash scrub about why they are not a scrub, but are actually a good player, just with a veneer of chess. So I don't like this character, but not because of Yud's intent.
Socratessa (Tessa for short) is, as gerikson points out, a Socratic strawman. That's it. It's unclear why Yud describes her as either a troll or pretty. He should have just said she was gallant.* She argues that Elo ratings exist and are good enough at predicting whether one player will beat another. Of course, Humman disagrees, and as the goofus, must be wrong.*
The story should end here, as it has fulfilled its mission as an obvious analog to Yud's whole thing about whether or not you can measure intelligence or say someone is smarter than another.
Part 2. Humman and Socratessa argue about whether or not you can measure intelligence or say someone is smarter than another.
E: if you were wondering, yes, there is eugenics in the story.
E2: forgot to tie up some allusions, specifically the g&g of it all. Marked added sentences with a *.
I hope Yud doesn't mind if I borrow Mr. Assi for my upcoming epic crossover fic, "Naruto and Batman Stop the Poo-Pocalypse"
Wait a minute, what do you mean, it's not supposed to be that kind of ass?
eugenics
Yes, the bit about John von Neumann sounds like he is stuck in the 1990s ("there must be a gene for everything!"), not today ("wow, genomes are vast interconnected systems, individual genes get turned on and off by environmental factors, and interventions often have the reverse effect we expect"). Scott Alexander wrote an essay admiring the Hungarian physics geniuses and tutoring.
yud's scientific model is aristotelian, i.e. he thinks of things he thinks should be true, then rejects counter-evidence with a bayesian cudgel or claims of academic conspiracy. So yeah genes are feature flags, why wouldn't they be (and eugenics is just SRE ig)
Meanwhile he objects to people theorycrafting objections (Tessa's dialogue about the midwit trap, and an article for the Cato Institute called "Is that your true rejection?"). That is an issue in casual conversations, but professionals work through these possibilities in detail and make a case that they can be overcome. Those cases often include past experience completing similar projects as well as theory. A very important part of becoming a professional is learning to spot "that requires a perpetual motion machine," "that implies P = NP," "that requires assuming that the sources we have are a random sample of what once existed" and not getting lost in the details; another is becoming part of a community of practitioners who criticize each other.
and donāt even get me started on splice variants
Yeah, after establishing a deeply tortured chess metaphor and beating it to death and beyond, Yud proceeds to just straight-up removed about how nobody is taking his book seriously. It just fucking keeps going even as it dips into the most pathetic and hateful eugenics part of their whole ideology because of course it does.
"Outsiders aren't agreeing with me. I must return to the cult and torture my flock with more sermons." type shit
The dumb strawman protagonist is called "Mr. Humman" and the ASI villain is called "Mr. Assi". I don't think any parody writer trying to make fun of rationalist writing could come up with something this bad.
The funniest comment is the one pointing out how Eliezer screws up so many basic facts about chess that even an amateur player can see all the problems. Now, if only the commenter looked around a little further and realized that Eliezer is bullshitting about everything else as well.
Let's not forget that the socratic strawwoman is named "Socratessa"
Anyone know who's (presumably) Tor from the "Tor's Cabinet of Curiosities" Youtube channel, and what's up with his ideological commitments? Somebody recommended me this video on some Wikipedia grifter, and I was enjoying it until suddenly (ca. 23:20) he name-drops Scott Alexander as "a writer whom I'm a big fan of". I thought, should somebody tell him. Then I looked up and the guy has an entire video on subtypes of rationalists, so he knows, and chose to present as a fan anyway. Huh. However, as far as a cursory glance goes, the channel doesn't seem to bat for, you know, "human biodiversity". (I haven't watched the rat video because I don't want to ruin my week)
The rat video starts with him proclaiming that in rationalism he "found his people"; that was the point where I bailed.
NotAwfulTech and AwfulTech converged with some ffmpeg drama on twitter over the past few days starting here and still ongoing. This is about an AI-generated security report by Google's "Big Sleep" (with no corresponding Google-authored fix, AI or otherwise). Hackernews discussed it here. Looking at ffmpeg's security page, there have been around 24 bigsleep reports fixed.
ffmpeg pointed out a lot of stuff along the lines of:
- They are volunteers
- They don't have enough money
- Certain companies that do use ffmpeg and file security reports also have a lot of money
- Certain ffmpeg developers are willing to enter consulting roles for companies in exchange for money
- Their product has no warranty
- Reviewing LLM generated security bugs royally sucks
- They're really just in this for the video codecs, moreso than treating every single Use-After-Free bug as a drop-everything emergency
- Making the first 20 frames of certain Rebel Assault videos slightly more accurate is awesome
- Think it could be more secure? Patches welcome.
- They did fix the security report
- They do take security reports seriously
- You should not run ffmpeg "in production" if you don't know what you're doing.
All very reasonable points, but with the reactions to their tweets you'd think they had proposed killing puppies or something.
A lot of people seem to forget this part of open source software licenses:
BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW
Or that venerable old C code will have memory safety issues for that matter.
It's weird that people are freaking out about some UAFs in a C library. This should really be dealt with in enterprise environments via sandboxing / filesystem containers / ASLR / control flow integrity / non-executable memory enforcement / only compiling the codecs you need… and oh gee, a lot of those improvements could be upstreamed!
For a moment there I was worried that ffmpeg had turned fash.
Anyway, amazing job ffmpeg, great responses. No notes
The ffmpeg social media maintainer is an Elon fan, so when he purchased Twitter and made foolish remarks about rewriting it all in C and how only hardcore programmers who write C/assembly are cool, they quickly jumped on it.
https://xcancel.com/FFmpeg/status/1598655873097912320
Ya, maybe it's a way to attract more contributors or donation money. Felt a bit weird after Elon was shitting on all the people who built Twitter and firing them.
More wiki drama: Jimbo tries to both-sides the Gaza genocide
E: just for clarity. Jimbo is the canon nickname of founder Jimmy Wales.
And just to describe a little more of what has happened, as far as I can tell: Wales is reportedly being interviewed about Wikipedia (probably due to the grookiepedia stuff). He was asked in a "high profile media interview" (his words, see first link) about the Gaza genocide article, and said that it "fails to meet our high standards and needs immediate attention". Part of that attention is that they've locked the article, and Jimbo has joined the talk page. His argument probably boils down to this comment he left:
Let's start with this quote from WP:NPOV: "Avoid stating seriously contested assertions as facts. If different reliable sources make conflicting assertions about a matter, treat these assertions as opinions rather than facts, and do not present them as direct statements." Surely you aren't going to argue that the core assertion of the article is not seriously contested?
The "core assertion" is contained in the lede:
The Gaza genocide is the ongoing, intentional, and systematic destruction of the Palestinian people in the Gaza Strip carried out by Israel during the Gaza war.
i.e. that there is a genocide happening at all.
Gizmodo article, in case this comment sucks in some way and you wanted to read a different report.
Watching another rationalist type on twitter become addicted to meth. You guys weren't joking.
(no idea who - just going by the subtweets).
Boss at new job just told me weāre going all-in on AI and I need to take a core role in the project
They want to give LLMs access to our wildly insecure mass of SQL servers filled with numeric data
Security a non factor
Sounds like the thing to do is to say yes boss, get Baldur Bjarnason's book on business risks and talk to legal, then discover some concerns that just need the boss's sign-off in writing.
Heartbreaking: I work in the cesspool called the Indian tech industry
They will stonewall me and move forward regardless. I'm going to do what I can, raise a stink and polish my CV
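If the stonewalling wins and the LLM really does get pointed at those servers, the bare-minimum guardrail is a read-only connection plus a table allow-list, so the model never writes raw SQL against production. A minimal sketch of that shape (sqlite3 standing in for the real servers; the table names and function here are made up for illustration):

```python
import sqlite3

# Hypothetical allow-list: the model may only request whole tables by name,
# never author its own SQL.
ALLOWED_TABLES = {"metrics", "sales"}

def fetch_for_llm(db_path: str, table: str, limit: int = 10):
    """Fetch rows over a read-only connection, allow-listed tables only."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"table {table!r} is not on the allow-list")
    # mode=ro: SQLite itself refuses any write on this connection.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        # Interpolating the table name is safe only because it came from the
        # allow-list above; the row limit is passed as a bound parameter.
        query = f"SELECT * FROM {table} ORDER BY rowid LIMIT ?"
        return conn.execute(query, (limit,)).fetchall()
    finally:
        conn.close()
```

On a real server the same idea is a dedicated database role with SELECT-only grants on a views schema; none of this fixes "security a non-factor" as a culture, but it is the sign-off-sized mitigation to put in writing.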