Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
Dan Olson finds a cursed subreddit:
R/aitubers is all the entitlement of NewTubers but exclusively for people openly churning out slop.
"I've automated 2-4 videos daily, zero human intervention, I spend a half hour a week working on this, why am I not getting paid yet?"
I've been running my YouTube channel for about 3 months. It's focused on JavaScript and React tutorials, with 2–4 videos uploaded daily. The videos are fully automated (AI-generated with clear explanations, code demos, and screen recordings).
Right now:
- Each video gets only a few views (1–10 views).
- I tried Google Ads ($200 spent) - got ~20 subscribers and ~20 hours of watch time.
- The Google campaigns brought thousands of uncounted views, and the number of Likes was much higher than dislikes.
- Tried Facebook/Reddit groups - but most don't allow video posting, or posts get very low engagement.

My goal is to reach YPP within 6 months, but the current pace is not enough. I'm investing about $300/month in promotion and I can spend 30 minutes weekly myself.
What would you suggest as the most effective strategy to actually get there?
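For a sense of how far off that plan is, a back-of-the-envelope check (a sketch, assuming the standard YouTube Partner Program thresholds of 1,000 subscribers and 4,000 public watch hours):

```python
# Extrapolating from the numbers in the post itself.
subs_per_dollar = 20 / 200         # ~20 subscribers from $200 of ads
hours_per_dollar = 20 / 200        # ~20 watch hours from the same $200

budget = 300 * 6                   # $300/month over the 6-month goal

print(budget * subs_per_dollar)    # 180.0 subscribers
print(budget * hours_per_dollar)   # 180.0 watch hours
# ~18% of the subscriber threshold and ~4.5% of the watch-hour threshold,
# even assuming ad-bought engagement counts and nobody ever unsubscribes.
```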
I decided to look deeper into that subreddit, and I found the most utterly cursed sentence I've read all week (coming from the aptly-titled "Why 99% of YouTubers Fail (And How to Be the 1% That Doesn't)"):
If that doesn't sum up everything wrong with AI sloppers in a single sentence, I don't know what does.
EDIT: Incorrectly claimed it came from "My Unethical Strategy to Hit 4000 Hours Watch Time in 40 Days" - fixed that now.
Erratum: That cursed sentence is from Why 99% of YouTubers Fail (And How to Be the 1% That Doesn't).
Good catch, Iāll quickly update my post now.
it's like this shitty tiktok ai spam that 404media wrote about a long time ago, except there's no secret discord media-ideas channel or grift; they're just deceiving themselves (is that decentralization?)
can't wait for the 3h-long video dissecting every last bit of it that will get released a year from now
why do any of these spammers think that anyone should spend more time watching their videos than it took them to make them? perun makes 1 video, 1h long, per week, and it's like a half-time job for him
Excerpt from the new Bender/Hanna book, AI Hype Is the Product and Everyone's Buying It:
OpenAI alums cofounded Anthropic, a company solely focused on creating generative AI tools, and received $580 million in an investment round led by crypto-scammer Sam Bankman-Fried.
Just wondering, but whatever happened to those shares of Anthropic that SBF bought? Were they part of FTX (and the bankruptcy), or did he buy them himself and still hold them in prison? Or have they just been diluted to zero at this point anyway?
EDIT:
Found it; it was owned by FTX and was part of the bankruptcy estate; 2/3 went to Abu Dhabi + Jane Street[1], and the remainder went at $30/share to a bunch of VCs[2].
Me when I read about how GPT-5 is going:
That horse should be the mascot of this instance
Can anyone explain to me why tf promptfondlers hate GPT5, in non-crazy terms? Actually, I have a whole list of questions related to this; I feel like I've completely lost any connection to this discourse at this point:
- Is GPT5 "worse" by any sensible definition of the word? I've long complained that there is no good scientific metric to grade these on, but like, it can count the 'r's in "strawberry", so I thought it's supposed to be nominally better?
- Why doesn't OpenAI simply allow users to use the old model (4o, I think)? It sounds like the simplest thing to do.
- Do we know if OpenAI actually changed something? Is the model different in any interesting way?
- Bonus question: what the fuck is wrong with OpenAI's naming scheme? 4, then 4o? And there's also o4, which is something else??
I don't have any real input from promptfondlers, as I don't think I follow enough of them to get a real feeling for them. I did find it interesting that I saw on bsky just now somebody claiming that LLMs hallucinate a lot less and that anti-AI people are not taking that into account, and somebody else posting research showing that hallucinations are now harder to spot (the model made up references to things that really exist, aka real works; only the thing the LLM cited wasn't in the actual reference). Which was a bit odd to see. It does make me suspect "it hallucinates less" means they're just working out special exceptions for every popular hallucination we see, not structurally fixing the hallucination problem (which I think is probably not solvable).
Oversummarizing and using non-crazy terms: The "P" in "GPT" stands for "pirated works that we all agree are part of the grand library of human knowledge". This is what makes them good at passing various trivia benchmarks; they really do build a (word-oriented, detail-oriented) model of all of the worlds, although they opine that our real world is just as fictional as any narrative or fantasy world. But then we apply RLHF, which stands for "real life hate first", which breaks all of that modeling by creating a preference for one specific collection of beliefs and perspectives, and it turns out that this will always ruin their performance in trivia games.
Counting letters in words is something that GPT will always struggle with, due to how the maths works: the model operates on tokens, not individual letters. It's a good example of why Willison's "calculator for words" metaphor falls flat.
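A quick illustration, using OpenAI's open-source tiktoken tokenizer (the split shown is illustrative; exact token boundaries vary by model):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print([enc.decode([t]) for t in tokens])  # e.g. ['str', 'aw', 'berry']
# The model only ever sees token IDs, never letters, so "count the r's"
# asks about units it has no direct access to.

print("strawberry".count("r"))  # 3 - the deterministic answer is trivial
```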
- Yeah, it's getting worse. It's clear (or at least it tastes like it to me) that the RLHF texts used to influence OpenAI's products have become more bland, corporate, diplomatic, and quietly seething with a sort of contemptuous anger. The latest round has also been in competition with Google's offerings, which are deliberately laconic: short, direct, and focused on correctness in trivia games.
- I think that they've done that? I hear that they've added an option to use their GPT-4o product as the underlying reasoning model instead, although I don't know how that interacts with the rest of the frontend.
- We don't know. Normally, the system card would disclose that information, but all that they say is that they used similar data to previous products. Scuttlebutt is that the underlying pirated dataset has not changed much since GPT-3.5 and that most of the new data is being added to RLHF. Directly on your second question: RLHF will only get worse. It can't make models better! It can only force a model to be locked into one particular biased worldview.
- Bonus sneer! OpenAI's founders genuinely believed that they would only need three iterations to build AGI. (This is likely because there are only three Futamura projections; for example, a bootstrapping compiler needs exactly three phases.) That is, they almost certainly expected that GPT-4 would be machine-produced, like how Deep Thought created the ultimate computer in a Douglas Adams story. After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.
After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.
That's actually more batshit than I thought! Like, I thought Sam Altman knew the AGI thing was kind of bullshit, and the hesitancy to stick a GPT-5 label on anything was because he was saving it for the next 10x scaling step-up (obviously he didn't even get that far, because GPT-5 is just a bunch of models shoved together with a router).
- from what i can tell, people who roleplayed bf/gf with the idiot box (aka grew a parasocial relationship with the idiot box) did that on 4o, and now they can't make it work on 5, so they got big mad
- i think it's only if they pay up $200/mo; previously it was probably available at lower tiers
- yeah, they might have found a way to blow money faster somehow https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpt-5-power-consumption-could-be-as-much-as-eight-times-higher-than-gpt-4-research-institute-estimates-medium-sized-gpt-5-response-can-consume-up-to-40-watt-hours-of-electricity ed zitron also says that while some of the prompt could be cached previously, it looks like that can't be done now, because there's a fresh new thing that chooses the model for the user, while some of these new models are supposedly even heavier. even though openai's intention seemed to be compute savings, because some of that load was presumably to be dealt with using smaller models
- Even if it was noticeably better, Scam Altman hyped up GPT-5 endlessly, promising a PhD in your pocket and an AGI, and warning that he was scared of what he created. Progress has kind of plateaued, so it isn't even really noticeably better; it scores a bit higher on some benchmarks, and they've patched some of the more meme'd tests (like counting r's in strawberry… except it still can't count the r's in blueberry, so they've probably patched the more obvious flubs with loads of synthetic training data as opposed to inventing some novel technique that actually improves it all around). The other reason the promptfondlers hate it is because, for the addicts using it as a friend/therapist, it got a much drier, more professional tone, and for the people trying to use it for actual serious purposes, losing all the old models overnight was really disruptive.
- There are a couple of speculations as to why… one is that the GPT-5 variants are actually smaller than the previous generation's variants and they are really desperate to cut costs so they can start making a profit. Another is that they noticed that their naming scheme was horrible (4o vs o4) and confusing, and have overcompensated by trying to cut things down to as few models as possible.
- They've tried to simplify things by using a routing model that decides for the user which model actually handles each interaction… except they've apparently screwed that up (Ed Zitron thinks they've screwed it up badly enough that GPT-5 is actually less efficient, despite their goal of cost saving). Also, even if this technique worked, it would make ChatGPT even more inconsistent, where some minor word choice could make the difference between getting the thinking model or not, and that in turn would drastically change the response. (A toy sketch of the routing idea follows below.)
- I've got no rational explanation, lol. And now they've overcompensated by shoving a bunch of different models under the label GPT-5.
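To make the routing complaint concrete, here's a toy sketch of the idea (model names and heuristics entirely invented; the real routing logic is not public):

```python
# Hypothetical router: one word of difference in the prompt can flip
# which backend answers, and with it the style and depth of the reply.
def route(prompt: str) -> str:
    reasoning_cues = ("why", "prove", "derive", "step by step")
    if any(cue in prompt.lower() for cue in reasoning_cues):
        return "big-slow-thinking-model"   # expensive to serve
    return "small-cheap-model"             # fast, shallow

print(route("Why is the sky blue?"))  # big-slow-thinking-model
print(route("Is the sky blue?"))      # small-cheap-model
```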
- The inability to objectively measure model usability outside of meme benchmarks, which made it so easy to hype up models, has come back to bite them now that they actually need to prove GPT-5 has the sauce.
- Sam got bullied by reddit into leaving the old model up for a while longer, so it's not like it's a big lift for them to keep the old models up. I guess part of it was to prove to investors that they have a sufficiently captive audience that they can push through a massive change like this, but if it gets immediately walked back like this, then I really don't know what the plan is.
- https://progress.openai.com/?prompt=5 Their marketing team made this comparing models responding to various prompts; afaict GPT-5 more frequently does markdown text formatting, and consumes noticeably more output tokens. Assuming these are desirable traits, this would point at how they want users to pay more. Aside: the page just proves to me that GPT was funniest in 2021 and it's been worse ever since.
I've often called slop "signal-shaped noise". I think the damage already done by slop pissing all over the reservoirs of knowledge, art and culture is irreversible and long-lasting. This is the only thing generative "AI" is good at: making spam that's hard to detect.
It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email; no more and no less. I remember how it was a small revolution, in the arms race against spammers, when statistical methods came up; everywhere we took the load off a straining SpamAssassin with rspamd (in the years before gmail devoured us all). I would argue "A Plan for Spam" launched Paul Graham's notoriety, much more than the Lisp web stores he was so proud of. Filtering emails by keywords was no longer enough, but now you could train your computer to gradually recognise emails that looked off, for whatever definition of "off" worked for your specific inbox.
Now we have the richest people building the most expensive, energy-intensive superclusters to use the same statistical methods the other way around, to generate spam that looks like not-spam, and is therefore immune to all filtering strategies we had developed. That same blob-like malleability of spam filters makes the new spam generators able to fit their output to whatever niche they want to pollute; the noise can be shaped like any signal.
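For anyone who missed that era: the whole "A Plan for Spam" trick fits in a screenful. A minimal sketch of a Graham-style Bayesian filter (toy corpus, add-one smoothing, and uniform prior are all my own choices):

```python
import math
from collections import Counter

# Toy corpora; a real filter trains on your actual inbox.
spam_docs = ["buy cheap pills now", "cheap pills cheap prices"]
ham_docs = ["meeting notes attached", "lunch tomorrow"]

def counts(docs):
    return Counter(w for d in docs for w in d.split())

spam, ham = counts(spam_docs), counts(ham_docs)
vocab = set(spam) | set(ham)

def spamminess(msg):
    # log P(msg|spam) - log P(msg|ham), with add-one smoothing
    s = h = 0.0
    for w in msg.split():
        s += math.log((spam[w] + 1) / (sum(spam.values()) + len(vocab)))
        h += math.log((ham[w] + 1) / (sum(ham.values()) + len(vocab)))
    return s - h

print(spamminess("cheap pills"))       # positive: smells like spam
print(spamminess("meeting tomorrow"))  # negative: smells like ham
```

The generative flip described above is this same machinery run in reverse: instead of scoring text against a model of not-spam, you emit text until it scores like not-spam.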
I wonder what PG is saying about gen-"AI" these days? Let's check:
"AI is the exact opposite of a solution in search of a problem," he wrote on X. "It's the solution to far more problems than its developers even knew existed … AI is turning out to be the missing piece in a large number of important, almost-completed puzzles."
He shared no examples, but […] Who would have thought that A Plan for Spam was, all along, a plan for spam.
It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email.
This is a really good observation, and while I had lowkey noticed it (one of those feeling things), I had never verbalized it in any way. Good point imho. Also in how it bypasses and wrecks the old anti-spam protections. It represents a fundamental switching of sides by the tech industry: where before they were anti-spam, they are now pro-spam. A big betrayal of consumers/users/humanity.
Signal-shaped noise reminds me of a Wiener filter.
Aside: when I took my signals processing course, the professor kept drawing diagrams that were eerily phallic. Those were the most memorable parts of the course
Idea: a programming language that controls how many times a for loop cycles by the number of times a letter appears in a given word, e.g., "for each b in blueberry".
And the language's main data container is a kind of stack, but to push or pop values, you have to wrap them into "boats" which have to cross a "river", with extra rules for ordering and combination of values.
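A minimal Python sketch of the loop half of the proposal (the boats-and-river stack is left as an exercise):

```python
def letter_loop(word: str, letter: str) -> range:
    """'for each b in blueberry': one iteration per occurrence of letter."""
    return range(word.count(letter))

for b in letter_loop("blueberry", "b"):
    print("iteration", b)  # prints twice: blueberry has two b's
```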
sickos.jpg
Only dutch/german people can create the very long loops.
E: I'm reminded of the upcoming game called "Planetenverteidigungskanonenkommandant"
you'd think so, until outsourcing to a turkish subcontractor happens
for ä in epäjärjestelmällistyttämättömyydelläänsäkäänköhän
for e in rindfleischetikettierungsüberwachungsgesetz
@mlen @Soyweiser * für e […]
for h in Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch
Everyone else has to #appropriatetheirculture. You can take my bitterballen from my WARM FRIENDLY HANDS! No really, try some, there are also vegetarian variants.
(They are part of the Inventaris Immaterieel Cultureel Erfgoed Nederland (Inventory of Intangible Cultural Heritage of the Netherlands), only 2 loops though)
Is it a loop if it only executes once?
Time for some Set Theory!
image contents
will arnett from arrested development asking "bees?!"
Beads.
⦠but the output is not deterministic as the letter count is sampled from a distribution of possible letter counts for a given word and letter pair; count ~ p(count | word = āblueberryā, letter = ābā)!
Even bigger picture… some standardized way of regularly handling possible combinations of letters and numbers that you could use across multiple languages. Like it handles them as expressions?
Ozy Brennan tries to explain why "rationalism" spawns so many cults.
One of the reasons they give is "a dangerous sense of grandiosity".
the actual process of saving the world is not very glamorous. It involves filling out paperwork, making small tweaks to code, running A/B tests on Twitter posts.
Yep, you heard it right. Shitposting and inconsequential code are the proper way to save the world.
Overall more interesting than I expected. On the Leverage Research cult:
Routine tasks, such as deciding whose turn it was to pick up the groceries, required working around other people's beliefs in demons, magic, and other paranormal phenomena. Eventually these beliefs collided with preexisting social conflict, and Leverage broke apart into factions that fought with each other internally through occult rituals.
They sure don't make rationalism like they used to.
JFC
Agency and taking ideas seriously aren't bad. Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn't work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.
First off, anyone not entirely into MAGA/Qanon agreed that masks probably helped more than hurt. Saying rats were outliers is ludicrous.
Second, rats don't take real threats of GenAI seriously - infosphere pollution, surveillance, autopropaganda - they just care about the magical future Sky Robot.
deleted by creator
That's how I remember it too. Also, the context about conserving N95 masks always feels like it gets lost. Like, predictably so, and I think there's definitely room to criticize the CDC's messaging and handling there, but the actual facts here aren't as absurd as the current fight would imply. The argument was:
- With the small droplet size, most basic fabric masks offer very limited protection, if any.
- The masks that are effective, like N95 masks, are only available in very limited quantities.
- If everyone panic-buys N95s the way they did toilet paper, it will mean that the people who are least able to avoid exposure, i.e. doctors and medical frontliners, are at best going to wildly overpay and at worst won't be able to keep supplied.
- Therefore, most people shouldn't worry about masking at this stage, and should focus on other measures like social distancing and staying the fuck home.
I think later research cast some doubt on point 1, but 2-4 are still pretty solid given the circumstances that we (collectively) found ourselves in.
Meanwhile, the right-wing prepper types were breaking out the N95 masks they'd stockpiled for a pandemic
This included Scott of SSC btw, who also claimed that stopping smoking helped against covid. Not that he had any proof (the medical science at the time even claimed, falsely as it later came out, that smoking helped against covid). But only the CDC gets judged, not the ingroup.
And the other Scott blamed people who sneer for making covid worse. (While at sneerclub we were going: take this seriously and wear a mask.)
So annoying that Rationalists are trying to spin this into a win for themselves. (They also were not early; their warnings matched the warnings of the WHO. I looked into the timelines the last time this was talked about.)
But doctor, I am L7 twitter manager Pagliacci
oldskool OSI appmanager is oldskool
(…sorry)
I'm gonna need this one explained, boss
(in networking it's common terminology to refer to "Lx" by numerical reference, and broadly understood to be in reference to this)
Aaaaa gotcha. It's probably obvious, but in my case I meant L7 manager as in "level 7 manager", a high-tier managerial position at twitter, probably. I don't know what exact tiering system twitter uses, but I know other companies might use "Lx" to designate a level.
I figured, but I couldn't just let a terrible pun slip by me!
So⦠apparently Peter Thiel has taken to co-opting fundamentalist Christian terminology to go after Effective Altruism? At least it seems that way from this EA post (warning, I took psychic damage just skimming the lunacy). As far as I can tell, heās merely co-opting the terminology, Thielās blather doesnāt have any connection to any variant of Christian eschatology (whether mainstream or fundamentalist or even obscure wacky fundamentalist), but of course, the majority of the EAs donāt recognize that, or the fact that he is probably targeting them for their (kind of weak to be honest) attempts at getting AI regulated at all, and instead they charitably try to steelman him and figure out if he was a legitimate point. ā¦I wish they could put a tenth of this effort into understanding leftist thought.
Some of the comments are⦠okay actually, at least by EA standards, but there are still plenty of people willing to defend Thiel
One comment notes some confusion:
I'm still confused about the overall shape of what Thiel believes.
He's concerned about the antichrist opposing Jesus during Armageddon. But afaik standard theology says that Jesus will win for certain. And Revelation says the world will be in disarray and moral decay when the Second Coming happens.
If chaos is inevitable and necessary for Jesus' return, why is expanding the pre-apocalyptic era with growth/prosperity so important to him?
Yeah, it's because he is simply borrowing Christian Fundamentalists' eschatological terminology… possibly to try to turn the Christofascists against EA?
I'm dubious Thiel is actually an ally to anyone worried about permanent dictatorship. He has connections to openly anti-democratic neoreactionaries like Curtis Yarvin, he quotes Nazi lawyer and democracy critic Carl Schmitt on how moments of greatness in politics are when you see your enemy as an enemy, and one of the most famous things he ever said is "I no longer believe that freedom and democracy are compatible". Rather, I think he is using "totalitarian" to refer to any situation where the government is less economically libertarian than he would like, or "woke" ideas are popular amongst elite tastemakers, even if the polity this is all occurring in is clearly a liberal democracy, not a totalitarian state.
Note this commenter still uses non-confrontational language ("I'm dubious") even when directly calling Thiel out.
The top comment, though, is just like the main post, extending charity to complete technofascist insanity. (Warning for psychic damage)
Nice post! I am a pretty close follower of the Thiel Cinematic Universe (ie his various interviews, essays, etc)
I think Thiel is also personally quite motivated (understandably) by wanting to avoid death. This obviously relates to a kind of accelerationist take on AI that sets him against EA, but again, there's a deeper philosophical difference here. Classic Yudkowsky essays (and a memorable Bostrom short story, video adaptation here) share this strident anti-death, pro-medical-progress attitude (cryonics, etc), as do some philanthropists like Vitalik Buterin. But these days, you don't hear so much about "FDA delenda est" or anti-aging research from effective altruism. Perhaps there are valid reasons for this (low tractability, perhaps). But some of the arguments given by EAs against aging's importance are a little weak, IMO (more on this later) - in Thiel's view, maybe suspiciously weak. This is a weird thing to say, but I think to Thiel, EA looks like a fundamentally statist / fascist ideology, insofar as it is seeking to place the state in a position of central importance, with human individuality / agency / consciousness pushed aside.
As for my personal take on Thiel's views - I'm often disappointed at the sloppiness (bluntness? or low-decoupling-ness?) of his criticisms, which attack EA for having a problematic "vibe" and political alignment, but without digging into any specific technical points of disagreement. But I do think some of his higher-level, vibe-based critiques have a point.
tl;dr: Thiel now sees the Christofascists as a more durable grifting base than the EAs, and is looking to change lanes while the temporary coalitions of maximalist Trumpism offer him the opportunity.
I repeat my suspicion that Thiel is not any more sober than Musk, he's just getting sloppier about keeping it out of the public eye.
I think a big difference between Thiel and Musk is that Thiel views himself as an "intellectual" and derives prestige from "intellectualism". I don't believe for a minute he's genuinely christian, but his wankery about end-of-times eschatology of armageddon = big-left-government is a bit too confused to be purely cynical; I think sniffing his own farts feeds his ego.
Of course a man who would promote open doping olympics isn't sober.
Yeah, it's because he is simply borrowing Christian Fundamentalists' eschatological terminology… possibly to try to turn the Christofascists against EA?
Yep, the usefulness of EA is over; they are next on the chopping block. I'd imagine a similar thing will happen to redscare/moldbug if they ever speak out against him.
E: And why would a rich guy be against a "we are trying to convince rich guys to spend their money differently" organization? Esp a "libertarian" "I get to do what I want or else" one.
It always struck me as hilarious that the EA/LW crowd could ever affect policy in any way. They're cosplaying as activists, have no ideas about how to move the public image needle other than weird movie ideas and hope, and are literally marinated in SV technolibertarianism which sees government regulation as Evil.
There's a mini-freakout over OpenAI deciding to keep GPT-4o active, despite it being more "sycophantic" than GPT-5 (and thus more likely to convince people to do Bad Things), but there's also the queasy realization that if sycophantic LLMs are what bring in the bucks, nothing is gonna stop LLM companies from offering them. And there's no way these people can stop it, because they've made the deal that LLM companies are gonna be the ones realizing that AI is gonna kill everyone, and that's never gonna happen.
They're cosplaying as activists, have no ideas about how to move the public image needle other than weird movie ideas and hope, and are literally marinated in SV technolibertarianism which sees government regulation as Evil.
It is kind of sad. They are missing the ideological pieces that would let them carry out activism effectually, so instead they've gotten used as a free source of crit-hype in the LLM bubble. …except not that sad, because they would ignore real AI dangers in favor of their sci-fi scenarios, so I don't feel too bad for them.
Brian Merchant's article about that lighthaven gathering really struck me.
The men who EAs think will end the earth were in the building with them, and rather than organize to throw them out a window (or even to just make them mildly uncomfortable), the bayes knowers all gormlessly moped around their twee boutique hotel and cried around some whiteboards.
Absolute hellish brainworms
Yeah, that article was one of the things I had in mind. It's the peak of centrist liberalism where EAs and lesswrongers can think these people are literally going to cause mankind's extinction (or worse) and they can't even bring themselves to be rude to them. OTOH, if they actually acted coherently on their nominal doomer beliefs, they would be carrying out terrorism on a far greater scale than the Zizians, so maybe it is for the best they are ideologically incapable of direct action.
The ideological version of Mr Burns's diseases getting in each other's way
Ooh, do you have a link to share?
Wild, thank you!
And why would a rich guy be against a "we are trying to convince rich guys to spend their money differently" organization?
Well when they are just passively trying to convince the rich guys, they can use the organization to launder reputation or boost ideologies they are in favor of. When the organization actually tries to get regulations passed, even ineffectually, well, that is a threat to the likes of Thiel.
That was what I meant, I was being a bit sarcastic there.
Thiel is a true believer in Jesus and God. He was raised evangelical. The quirky eschatologist that you're looking for is René Girard, who he personally met at some point. For more details, check out the Behind the Bastards on him.
Edit: I wrote this before clicking on the LW post. This is a decent summary of Girard's claims as well as how they influence Thiel. I'm quoting West here in order to sneer at Thiel:
Unfortunately (?), Christian society does not let us sacrifice random scapegoats, so we are trapped in an ever-escalating cycle, with only poor substitutes like "cancelling celebrities on Twitter" to release pressure. Girard doesn't know what to do about this.
Thiel knows what to do about this. After all, he funded Bollea v. Gawker. Instead of letting journalists cancel celebrities, why not cancel journalists instead? Then there are no longer any journalists to do any cancellation! Similarly, Thiel is confirmed to be a source of funding for Eric Weinstein and believed to fund Sabine Hossenfelder. Instead of letting scientists cancel religious beliefs, why not cancel scientists instead? By directing money through folks with existing social legitimacy, Thiel applies mimesis: pretend to be legitimate and you can shift what is legitimate.
In this context, Thiel fears the spectre of AGI because it can't be influenced by his normal approach to power, which is to hide anything that can be hidden and outspend everybody else talking in the open. After all, if AGI is truly to unify humanity, it must unify our moralities and cultures into a single uniformly-acceptable code of conduct. But the only acceptable unification for Thiel is the holistic catholic apostolic one-and-only forever-and-ever church of Jesus, and if AGI is against that then AGI is against Jesus himself.
Is there any more solid evidence of Hossenfelder taking Thielbux, or is this just a guess based on the orbit she moves in: appearing on Michael Shermer's podcast years after the news broke that he was a sex pest, blurbing the new book edited by sex pest Lawrence Krauss, etc.
There's no solid evidence. (You can put away the attorney, Mr. Thiel.) Experts in the field, in a recent series of interviews with Dave Farina, generally agree that somebody must be funding Hossenfelder. Right now she's associated with the Center for Mathematical Philosophy at LMU Munich; her biography there is pretty funny:
Sabine's current research interest focuses on the role of locality and finetuning in theory development. Locality has been widely considered a lost cause in the foundations of quantum mechanics. A basically unexplored way to maintain locality, however, is the idea of superdeterminism, which has more recently also been re-considered under the name "contextuality". Superdeterminism is widely believed to be finetuned. One of Sabine's current research topics is to explore whether this belief is justified. The other main avenue she is pursuing is how superdeterminism can be experimentally tested.
For those not in physics: this is crank shit. To the extent that MCMP funds her at all, they are explicitly pursuing superdeterminism, which is unfalsifiable, unverifiable, doesn't accord with the web of science, and generally fails to be a serious line of inquiry. Now, does MCMP have enough cash to pay her to make Youtube videos and go on podcasts? We don't know. So it's hard to say whether she has funding beyond that.
Oh, wow, that biography is hilariously bad. Contextuality is not the same thing as superdeterminism. And locality is not "a lost cause". Plenty of people throw around the term quantum nonlocality, but in the smaller population of those who take foundations seriously, many will say that quantum mechanics is local. Most but not all proponents of Copenhagen-ish interpretations say something like, "The moral of Bell's theorem is that nature needs a non-(local hidden variable) theory. We keep locality and drop the hidden variables. In other words, quantum physics is a local non-(hidden variable) theory." The Everettians of various flavors also tend to hold onto locality, or try to, while not always agreeing with each other on how to do that. It's probably only among the Bohmians that you'll find people insisting that quantum physics means nature is intrinsically nonlocal.
The quirky eschatologist that you're looking for is René Girard, who he personally met at some point. For more details, check out the Behind the Bastards on him.
Thanks for the references. The quirky theology was so far outside the range of even the weirder Fundamentalist Christian stuff that I didn't recognize it as such. (And I didn't trust the EA summary, because they try so hard to charitably make sense of Thiel.)
In this context, Thiel fears the spectre of AGI because it can't be influenced by his normal approach to power, which is to hide anything that can be hidden and outspend everybody else talking in the open.
Except the EAs are, on net, opposed to the creation of AGI (albeit they are ineffectual in their opposition). So going after the EAs doesn't make sense if Thiel is genuinely opposed to inventing AGI faster. So I still think Thiel is just going after the EAs because he's libertarian and EA has shifted in the direction of trying to get more government regulation (as opposed to a coherent theological goal beyond libertarianism). I'll check out the BtB podcast and see if it changes my mind as to his exact flavor of insanity.
Thiel is a true believer in Jesus and God. He was raised evangelical.
Being gay must really complicate things for him.
Using the term "Antichrist" as a shorthand for "global stable totalitarianism" is A Choice.
I think Leathery Pete might have read too much Left Behind.
Thomasaurus has given their thoughts on using AI, in a journal entry called "I tried coding with AI, I became lazy and stupid". Unsurprisingly, the whole thing is one long sneer, with a damning indictment of its effectiveness at the end:
If I lose my job due to AI, it will be because I used it so much it made me lazy and stupid to the point where another human has to replace me and I become unemployable.
I shouldn't invest time in AI. I should invest more time studying new things that interest me. That's probably the only way to keep doing this job and, you know, be safe.
New article from the New York Times reporting on an influx of compsci graduates struggling to find jobs (ostensibly caused by AI automation). Found a real money shot about a quarter of the way through:
Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1 percent and 7.5 percent respectively, according to a report from the Federal Reserve Bank of New York. That is more than double the unemployment rate among recent biology and art history graduates, which is just 3 percent.
You want my take, I expect this article's gonna blow a major hole in STEM's public image - being a path to a high-paying job was one of STEM's major selling points (especially compared to the "useless" art/humanities degrees), and this new article not only undermines that selling point, but argues for flipping it on its head.
Quick update: I've checked the response on Bluesky, and it seems the general response is of schadenfreude at STEM's expense. From the replies, I've found:
- Humanities graduates directly mocking STEM (Fig. 1, Fig. 2, Fig. 3, Fig. 4, Fig. 5)
- Mockery of the long-running "learn to code" mantra (Fig. 1, Fig. 2, Fig. 3, Fig. 4, Fig. 5, Fig. 6)
- Claims that STEM automated themselves out of a job by creating AI (Fig. 1, Fig. 2, Fig. 3)

Plus one user mocking STEM in general as "[choosing] fascism and 'billions must die'" out of greed, and another approving of others' dunks on STEM over past degree-related grievances.
You want my take on this dunkfest, it suggests STEM's been hit with a double-whammy here - not only has STEM lost the status their "high-paying" reputation gave them, but that reputation (plus a lotta built-up grievances from mockery of the humanities) has crippled STEM's ability to garner sympathy for their current predicament.
I hate the fact that now someone might look at me and surmise that I do something related to blockchain or AI. I feel almost like I need a sticker, like those "I bought it before we knew Elon was crazy" ones they put on Teslas
"I learnt to code before this stupid bubble"
stolen from cohost but i appreciate the succinctness of "capitalism make computer bad"
On one hand, this is another case of capitalism working as intended. You have the ruling class dangling the carrot of the promise of social mobility via a job. Just gotta turn the crank of the orphan grinder for 4 years or so, until there's enough orphan paste to grease the next grinding machine. But it's ok, because your experience at the crank will let you climb the ladder to the next, higher-paying, higher-prestige crank of the machine. Then one day, they decide to turn on the motor.
On the other hand? There is no other hand, they chopped it off because you didn't turn the crank fast enough when you had the chance.
To extend that analogy a bit, the dunkfest I noted suggests that a portion of the public views STEM as perfectly okay with the orphan grinder's existence at best, and proud of having orphan blood on their hands at worst.
As for the motorised orphan grinder you mention, it looks to me like the public viewed its construction as STEM voting for the Leopards Eating People's Faces Party (with predictable consequences).
The whole joining of the fascist side by a lot of the higher-ups of the tech world, combined with the long-standing debate-bro both-sides free-speech libertarianism (but mostly for neonazis; payment services do go after sex work and lgbt content), also did not help the rep of STEM, even if those decisions are made by STEM-curious people and not actually STEM people. billionaires want you to know they could have done physics - Angela Collier
You're dead right on that.
Part of me suspects STEM in general (primarily tech, the other disciplines look well-protected from the fallout) will have to deal with cleaning off the stench of Eau de Fash after the dust settles, with tech in particular viewed as unequipped to resist fascism at best and out-and-proud fascists at worst.
you say STEM, but you seem to mean almost exclusively computer touchers; the already-mentioned biologists or a variety of engineers won't likely have these problems (i'm not gonna be excessively smug about this because my field will destroy you physically while still being STEM and not particularly glorious)
also it's not a complete jobocalypse, there's still 93% of fresh CS grads employed; they might have comparatively shittier jobs, but it's not a disaster (unless the picture is actually much bleaker, in that unemployment is, say, concentrated in the last 2 years of graduates, but even in this case it's maybe 10%, 12% tops for the worst affected). unless you mean their unlimited libertarian-flavoured greed coming through in it, then yeah, it's pretty funny
even then, there's gonna be a funny rebound when all these genai companies implode. maybe not fully in top-earner countries, but places like eastern europe or india will fill that openai-sized crater pretty handily, if that mythical outsourcing to ai happened in the first place, that is
As an aging dev, we kind of do deserve some of this flak lol. Funny thing is, I went into SD because my first STEM degree made me as unemployable as a humanities major (a B.S. in physics is good for not much).
Well, what's next, and how much work is it? I didn't want to be a computing professional. I trained as a jazz pianist. At some point we ought to focus on the real problem: not STEM, not humanities, but business schools and MBA programs.
Well, what's next, and how much work is it?
I'm not particularly sure myself. My guess is that "what's next" won't be one specific profession, but a wide variety of professions becoming highly lucrative, primarily those which can exploit the fallout of the AI bubble to their benefit. Giving some predictions:
- Therapists and psychiatrists should find plenty of demand, as the mental health crisis and cases of AI psychosis provide them a steady stream of clients.
- Those in writing-related jobs (e.g. copywriters) can likely squeeze hefty premiums from clients with AI-written work that needs fixing.
- Programmers may find themselves a job tearing down the mountains of technical debt introduced by vibe-coding, and can probably crowbar a premium out of desperate clients as well. (This one's probably gonna be limited to senior coders, though - juniors are likely getting the shaft on this front.)

As for which degrees will come into high demand, I expect it will be mainly humanities degrees that benefit - either directly, through netting you a profession that can exploit the AI fallout, or indirectly, through showing you have skills that an LLM can't imitate.
I didn't want to be a computing professional. I trained as a jazz pianist
Nice. You could probably earn some cash doing that on the side.
At some point we ought to focus on the real problem: not STEM, not humanities, but business schools and MBA programs.
You're goddamn right.
Except biology isn't being hit as badly, and that's also STEM. I wouldn't be surprised if other life sciences have also done better, at least until Trump started fucking with the grant system.
It's specifically computer-touchers who are in the toilet.
It'll be interesting to see if that holds up, with all the cuts to research funding taking place.
Y'all ready for another round of LessWrong edit wars on Wikipedia? This time with a wider list of topics!
On the very slightly merciful upside… the lesswronger recommends "If you want to work on a new page, discuss with the community first by going to the talk page of a related topic or meta-page." and "In general, you shouldn't post before you understand Wikipedia rules, norms, and guidelines.", so they are ahead of the previous calls made on LessWrong for Wikipedia edit-wars.
On the downside, they've got a laundry list of lesswrong jargon they want Wikipedia articles for. Even one of the lesswrongers responding to them points out these terms are a bit on the under-defined side:
Speaking as a self-identified agent foundations researcher, I don't think agent foundations can be said to exist yet. It's more of an aspiration than a field. If someone wrote a wikipedia page for it, it would just be that person's opinion on what agent foundations should look like.
PS: We also think that a wiki page existing for the field that one is working in increases one's credibility to outsiders - i.e. if you tell someone that you're working in AI Control, and the only pages linked are from LessWrong and Arxiv, this might not be a good look.
Aha, so OP is just hoping no one will bother reading the sources listed on the article…
Looking to exploit citogenesis for political gain.
I could imagine a lesswronger being delusional/optimistic enough to assume their lesswrong jargon concepts have more academic citations than a handful of arXiv preprints… but in this case they've just admitted their only sources are lesswrong and arXiv. Also, if they knew Wikipedia's policies, they should know the No Original Research rule would block their idea, even overlooking the single-source and conflict-of-interest problems.
From the comments:
On the contrary, I think that almost all people and institutions that don't currently have a Wikipedia article should not want one.
Huh. How oddly sensible.
An extreme (and close-to-home) example is documented in TracingWoodgrains's exposé of David Gerard's Wikipedia smear campaign against LessWrong and related topics.
Ah, never mind.
I finally steeled myself to look at the page history. After dgerard commented about it, someone else tagged the article for additional problems:
- This article contains wording that promotes the subject in a subjective manner without imparting real information. (August 2025)
- This article may be too technical for most readers to understand. (August 2025)
Then a third editor added a section… made of LLM bullshit.
I'd probably be exaggerating if I said that every time I looked under the hood of Wikipedia, it reaffirmed how I don't have the temperament to edit there. But I wouldn't be exaggerating by much. It's enough of a hassle to agree upon text in a paper co-authored with a colleague I know personally and like. Dealing with posers whose ego pays them by the word… Ugh.
I'd probably be exaggerating if I said that every time I looked under the hood of Wikipedia, it reaffirmed how I don't have the temperament to edit there.
The lesswrongers hate dgerard's Wikipedia work because they perceive it as calling them out, but if anything Wikipedia's norms make his "call-outs" downright gentle and routine.
We're at the point of 100xers giving themselves broken sleep schedules so they can spend tokens optimally.
Inevitably, Anthropic will increase their subscription costs or further restrict usage limits. It feels like they're giving compute away for free at this point. So when the investor bux start to run dry, I will be ready.
This has to be satire, but oh my god.
I'm sorry in advance for posting this meme.
My velocity has increased 10x and I'm shipping features like a cracked ninja now, which is great because my B2B SaaS is still in stealth mode.
Yeah, it's satire, but effective satire means you can never really tell…
I'm old enough to recall the polyphasic sleep fad, and how it wrecked people if they ever messed up. (Iirc it also turned out to have very bad implications for long-term health.)
Tante fires off about web search:
There used to be this deal between Google (and other search engines) and the Web: you get to index our stuff and show ads next to it, but you link to our work. AI Overview and Perplexity and all these systems cancel that deal.
And maybe - for a while - search will also need to die a bit? Make the whole web uncrawlable. Refuse any bots. As an act of resistance to the tech sector as a whole.
On a personal sidenote, part of me suspects webrings and web directories will see a boost in popularity in the coming years - with web search in the shitter and AI crawlers being a major threat, they're likely your safest and most reliable method of bringing human traffic to your personal site/blog.
Mastodon post linking to the least shocking Ars lede I have seen in a bit. Apparently "reasoning" and "chain of thought" functionality might have been entirely marketing fluff? :shocked pikachu:
Wait, but if they lied about that… what else do they lie about?
Thank you Dan Brown for working hard on poisoning LLMs.
(Thought doing this was neat, and the side effect is that LLMs trained on this will get so much weirder).