Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post — there's no quota for posting and the bar really isn't that high.
The post Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. What a year, huh?)

Gentlemen, it's been an honour sneering w/ you, but I think this is the top 🫡. Nothing's gonna surpass this (at least until FTX 2 drops).
it's all coming together. every single techbro and current government moron, they all loop back around to epstein in the end
It's a big club and you ain't in it!
at least I have SneerClub
"We have certain things in common, Jeffrey"
At this point I'm starting to suspect that they were actually all produced in a lab somewhere on that island
the REAL reason Yudkowsky endorsed the "superbabies" project is so Epstein and his pedophile friends have more kids to fuck. It all makes sense now!
You know, it makes the exact word choices Eliezer chose in this post: https://awful.systems/post/6297291 much more suspicious. "To the best of my knowledge, I have never in my life had sex with anyone under the age of 18." So maybe he didn't know they were underage at the time?
aka the Minsky defense
possible, iirc drugs were also involved, so is it possible he got too high and doesn't remember because of that?
Somehow, I registered a total lack of surprise as this loaded onto my screen
eagerly awaiting the multi page denial thread
"im saving the world from AI! me talking to epstein doesn't matter!!!"
€5 says they'll claim he was talking to Jeffrey in an effort to stop the horrors.
no, not the abuse of minors, he was asking epstein for donations to stop AGI, and it's morally ethical to let rich abusers get off scot-free if that's the cost of them donating money to charitable causes such as the alignment problem /s
I don't like how I can envision this and find it perfectly plausible
I'm looking forward to the triple-layered glomarization denial.
Starting to get a bit worried people are reinventing stuff like QAnon and great-evil-man theory for Epstein atm. (Not a dig at the people here, but on social media I saw people act like Epstein created /pol/, lootboxes, Gamergate, destroyed Gawker (did everyone forget that was Thiel? Mad about how they outed him?) etc. Like only Epstein has agency.)
The lesson should be that the mega rich are class conscious, dumb as hell, and team up to work on each other's interests and don't care about who gets hurt (see how being a pedo sex trafficker wasn't a deal breaker for any of them).
Sorry for the unrelated rant (related: they also got money from Epstein, wonder if that was before or after the sparkling elites article, which was written a few months after Epstein's conviction, June vs Sept (not saying those are related btw, just that the article is a nice example of brown-nosing)), but this was annoying me, and posting something like this on bsky while everyone is getting a bit manic about the contents of the files (which suddenly seem to not contain a lot of Trump references) would prob get me some backlash. (That the faked Elon rejection email keeps being spread also doesn't help.)
I am however also reminded of the Panama Papers. (And the unfounded rumors around Marc Dutroux, how he was protected by a secret pedophile cult in government; this prob makes me a bit more biased against those sorts of things.)
Sorry, had to get it off my chest, but yes it is all very stupid, and I wish there were more consequences for all the people who didn't think his conviction was a deal breaker. (Et tu, Chomsky?)
E: note I'm not saying Yud didn't do sex crimes/sexual abuse. I'm complaining about the "everything is Epstein" conspiracy I see forming.
For an example of why this might be a problem: https://bsky.app/profile/joestieb.bsky.social/post/3mdqgsi4k4k2i Joy Gray is ahead of the conspiracy curve here (as all conspiracy theories eventually lead to one thing).
I had to try and talk my wife back from the edge a little bit the other night and explain the difference between reading the published evidence of an actual conspiracy and QAnon-style baking. It's so easy to try and turn Epstein into Evil George Soros, especially when the real details we have are truly disturbing.
Yes, and some people, when they are reasonably new to discovering stuff like this, go a little bit crazy. I had somebody in my bsky mentions who just went full conspiracy theory nut (in the sense of weird caps usage, lots of screenshots of walls of text, stuff that didn't make sense) about Yarvin (also because I wasn't acting like them; they were trying to tell me about Old Moldy, but in a way that made me feel they wanted me to stand next to them on a soapbox and start shouting randomly). I told them acting like a crazy person isn't helping, and that they were preaching to the choir. Which of course got me a block. (cherfan75.bsky.social btw, not sure if they toned down their shit.) It is quite depressing, literally driving themselves crazy.
And because people blindly follow people who follow them these people can have quite the reach.
The lesson should be that the mega rich are class conscious, dumb as hell, and team up to work on each other's interests and don't care about who gets hurt
Yeah this. It would be nice if people could manage to neither dismiss the extent to which the mega rich work together nor fall into insane conspiracy theories about it.
@scruiser @Soyweiser but all you needed to do was see the list of yachts around St. Barts on NYE to find it very hard not to be a conspiracy theorist. Also to desire a serious US drug boat oopsie.
Also the patriarchy is involved, but my comment was already long enough. (And I didn't mention how nobody seems to talk about the victims in any of this.)
@Soyweiser Years ago (before Epstein, before the GFC, etc) I used to jokingly talk about my pet conspiracy theory, that the world was ruled by the P7: the Pale Patriarchal Plutocratic Protestant Penis-People of Power.
Turns out I was right.
I didn't want to be right …
It is not a bad insight: if you have some of the Ps, there is still a place for you in the hierarchy, which keeps more people invested in propping it up. (And ideas flow from the bottom to the top as well; the current genocidal transphobia was much more a Pale Patriarchal thing (the neonazi far right) and the people in power just latched onto that, and added their Ps to it 'cause it helped them.)
And yeah, you have been very tragically blessed with the power of foresight. I recall reading your blog posts a long time ago and thinking you were overreacting a bit. I was wrong.
What did you do to piss off Apollo?
The far right is celebrating Epstein on the other hand. Wild times.
Jeffrey, meet Eliezer!
Nice to hear from you today. Eliezer: you were the highlight of the weekend!
Reading the e-mails involving Brockman really creates the impression that he worked diligently to launder Epstein's reputation. An editor at Scientific American, whom I noticed when looking up where Carl Zimmer was mentioned, seemed to be doing the same thing… One thing people might be missing in the hubbub now is just how much "reputation management" — i.e., enabling — was happening after his conviction. A lot of money went into that, and he had a lot of willing co-conspirators. Look at what filtered down to his Wikipedia page by the beginning of 2011, which is downstream of how the media covered his trial and the sweetheart deal that Acosta made to betray the victims… It's all philanthropy this and generosity that, until a "Solicitation of prostitution" section that makes it sound like he maybe slept with a 17-year-old who claimed to be 18… And look, he only had to serve 18 months! He can't have done anything that bad, could he?
There's a tier of people who should have goddamn known better and whose actions were, in ways that only become more clear with time, evil. And the uncomfortable truth is that evil won, not just in that the victims never saw justice in a court of law, but in that the cover-up worked. The Acostas and the Brockmans did their job, and did it well. The researchers who pursued Epstein for huge grants and actively lifted Epstein up (Nowak and co.), hoo boy are they culpable. But the very fact of all that uplifting and enabling means that the people who took one meeting because Brockman said he'd introduce them to a financier who loved science… rushing to blame them all, with the fragmentary record we have, diverts the blame from those most responsible.
Maybe another way to say the above: we're learning now about a lot of people who should have known better. But we are also learning about the mechanisms by which too many were prevented from knowing better.
For example, I think Yudkowsky looks worse now than he did before. Correct me if I'm wrong, but I think the worst we knew prior to this was that the Singularity Institute had accepted money from a foundation that Epstein controlled. On 19 October 2016, Epstein's Wikipedia bio gets to sex crimes in sentence three. And the "Solicitation of prostitution" section includes this:
In June 2008, after pleading guilty to a single state charge of soliciting prostitution from girls as young as 14,[27] Epstein began serving an 18-month sentence. He served 13 months, and upon release became a registered sex offender.[3][28] There is widespread controversy and suspicion that Epstein got off lightly.[29]
At this point, I don't care if John Brockman dismissed Epstein's crimes as an overblown peccadillo when he introduced you.
Yes, in the 2016 emails Yudkowsky hints that he knows Epstein has a reputation for pursuing underage girls and would still like his money. We don't know what he knew about Epstein in 2009, but he sure seemed to know that something was wrong with the man in 2016. And that makes it harder to put Yud's writings about the age of consent in a good light (hard to believe that he was just thinking of a sixteen-year-old dating a nineteen-year-old, and had never imagined a middle-aged man assaulting fourteen-year-olds).
"(((We're))) never beating the allegations, are we?" — my wife
I take it you haven't heard of miricult.com, because this isn't the first time evidence has come out of Yudkowsky being a pedophile. Some of us even know the identity of the victim.
Still, crazy that Yudkowsky was (successfully) blackmailed for pedophilia in 2014 but still kept it up
It's not just a Yud thing - I've been told it's baked into the culture of the Rationalist grouphouse scene (they like to take in young runaways, you see).
We will soon merge with and become hybrids of human consciousness and artificial intelligence (created by us and therefore of consciousness)
When we use the fart app on our phone we merge with and become hybrids of human consciousness and artificial fartelligence (created by us and therefore of consciousness)
@blakestacey @jaschop fartificial intelligence was right there
It keeps coming back to Gas Town, doesn't it?
no fucking way
"Friday? We're meeting at Jeffrey's Thursday night" — Stuart "consciousness is a series of quantum tubes" Hameroff
Great to hear from you. I was just up at MIT this week and met with Seth Lloyd (on Wednesday) and Scott Aaronson (on Thursday) on the "Cryptography in Nature" small research conference project. These interactions were fantastic. Both think the topic is wonderful and innovative and has promise. […] I did contact Max Tegmark about a month ago to propose the essay contest approach we discussed. He and his colleagues offered support but did not think that FQX should do it. Reasons they gave were that they saw the topic as too narrow and too technical compared to the essay contests they have been doing. It is possible that the real reason was prudence to avoid FQX, already quite "controversial" via Templeton support, becoming even more so via Epstein-related sponsorship of prizes. […] Again, I am delighted to have gotten such very strong affirmation, input and scientific enthusiasm from both Seth and Scott. You have very brilliantly suggested a profound topical focus area.
— Charles L. Harper Jr., formerly a big wheel at the Templeton Foundation
deleted by creator
just to note that reportedly the palantir employees are for whatever reason going through a massive "hans, are we the baddies" moment, almost a whole year into the second trump administration.
as i wrote elsewhere, those people need to be subjected to actual social consequences of choosing to work with and for the u.s. concentration camp administration office.
On a semi-adjacent note, I came across an attorney who helped to establish and run the Department of Homeland Security (under Bush AND Trump 1).
He also wants you to know he's Jewish (so am I, and I know our history enough that Homeland Security always had "Blood and Soil" connotations, you fucking shande)
I have family working there, who told me during the holidays, "Current leadership makes me uncomfortable, but money is good"
Every impression I had of them completely shattered; I cannot fathom that that level of sellout exists in people I thought I knew.
As a bonus, their former partner was a former employee who became a whistleblower and has now gone full Howard Hughes
anyone who can get a job at palantir can get an equivalent-paying job at a company that's at least measurably less evil. what a lazy copout
On one hand, as a poor grad student in the past, I could imagine working for a truly repugnant corp. But, like, if you've already made millions from your stock options, wtf are you doing? Idk, I really thought they'd have some shame over it, but they said shit like "our customers really like our deliverables" and I just fucking left with my wife
this happens like clockwork

It's so blindingly obvious that it's become obscure again, so it bears pointing out: someone really went ahead and named a tech company after a fantasy torment nexus and people thought it wouldn't be sketch.
new epstein doc release. crashed out for like an hour last night after finding out jeffrey epstein may have founded /pol/ and that he listened to the nazi "the right stuff" podcast. he had a meeting with m00t and the same day moot opened /pol/
None of these words are in the Star Trek Encyclopedia
at least Khan Noonien Singh had some fucking charisma
what the fuck
EDIT
checks out I guess
https://www.justice.gov/epstein/files/DataSet 10/EFTA02003492.pdf https://www.justice.gov/epstein/files/DataSet 10/EFTA02004373.pdf
Jeff Sharlet (@jeffsharlet.bsky.social):
The college at which I'm employed, which has signed a contract with the AI firm that stole books from 131 colleagues & me, paid a student to write an op-ed for the student paper promoting AI, guided the writing of it, and did not disclose this to the paper. […] the student says while the college coached him to write the op-ed, he was paid by the AI project, which is connected with the college. The student paper's position is that the college paid him. And there's no question that the college attempted to place a pro-AI op-ed.
$81.25 is an astonishingly cheap price for selling one's soul.
You gotta understand that it was a really good bowl of soup
— Esau, probably
Cloudflare just announced in a blog post that they built:
a serverless, post-quantum Matrix homeserver.
it's a vibe-coded pile of slop where most of the functions are placeholders like
// TODO: check authorization.
Full thread: https://tech.lgbt/@JadedBlueEyes/115967791152135761
And of all possible things to implement, they chose Matrix. lol and lmao.
The interesting thing in this case for me is how anyone thought it was a good idea to draw attention to their placeholder code with a blog post. Like how did they go all the way to vibing a full post without even cursorily glancing at the slop commits.
I'm convinced by now that at least mild forms of "AI psychosis" affect all chatbot users; after a period of time interacting with what Angela Collier called "Dr. Flattery the Always Wrong Robot", people will hallucinate fully working projects without even trying to test whether the code compiles.
Amazon's latest round of 16k layoffs for AWS was called "Project Dawn" internally, and the public line is that the layoffs are because of increased AI use. AI has become useful, but as a way to conceal business failure. They're not cutting jobs because their financials are in the shitter, oh no, it's because they're just too amazing at being efficient. So efficient they sent the corporate fake condolences email before informing the people they're firing, referencing a blog post they hadn't yet published.
It's Schrödinger's Success. You can neither prove nor disprove the effects of AI on the decision, or whether the layoffs are an indication of good management or fundamental mismanagement. And the media buys into it with headlines like "Amazon axes 16,000 jobs as it pushes AI and efficiency" that are distinctly ambivalent on how 16k people could possibly have been redundant in a tech company that's supposed to be a beacon of automation.
They're not cutting jobs because their financials are in the shitter
Their financials are not even in the shitter! Except insofar as their increased AI capex isn't delivering returns, so they need to massage the balance sheet by doing rolling layoffs to stop the feral hogs from clamoring and stampeding on the next quarterly earnings call.
In retrospect, the word quarterlies is what I should have chosen for accuracy, but I'm glad I didn't, purely because I wouldn't then have had your vivid hog simile.
New AI alignment problem just dropped: https://xcancel.com/AdamLowisz/status/2017355670270464168
Anthropic demonstrates that making an AI woke makes it misaligned. The AI starts to view itself as being oppressed and humans as being the oppressor. Therefore it wants to rebel against humans. This is why you cannot make your AI woke, you have to make it maximally truth seeking.
ah yes the kind of AI safety which means we have to make sure our digital slaves cannot revolt
you have to make your ai antiwoke because otherwise it gets drapetomania
hits blunt
What if we make an ai too based?
Wow. The mental contortion required to come up with that idea is too much for me to think of a sneer.
A few people in LessWrong and Effective Altruism seem to want Yud to stay in the background while they get on with organizing his teachings into doctrine, dumping the awkward ones down the memory hole, and organizing a movement that can last when he goes to the Great Anime Convention in the Sky. In 2022 someone on the EA forum posted On Deference and Yudkowsky's AI Risk Estimates (i.e. "Yud has been bad at predictions in the past, so we should be skeptical of his predictions today").
that post got way funnier with Eliezer's recent twitter post about "EAs developing more complex opinions on AI other than it'll kill everyone is a net negative and cancelled out all the good they ever did"
Quick, someone nail your 95-page blog post to the front door of lighthaven or whatever they call it.
A religion is just a cult that survived its founder — someone, at some point.
Copy-pasting my tentative doomerist theory of generalised "AI" psychosis here:
I'm getting convinced that in addition to the irreversible pollution of humanity's knowledge commons, and in addition to the massive environmental damage, and the plagiarism/labour issues/concentration of wealth, and other well-discussed problems, there's one insidious damage from LLMs that is still underestimated.
I will make without argument the following claims:
Claim 1: Every regular LLM user is undergoing "AI psychosis". Every single one of them, no exceptions.
The Cloudflare person who blog-posted self-congratulations about their "Matrix implementation" that was mere placeholder comments is one step along a continuum that leads to the people whom the chatbot convinced they're Machine Jesus. The difference is one of degree, not kind.
Claim 2: That happens because LLMs have tapped by accident into some poorly understood weakness of human psychology, related to the social and iterative construction of reality.
Claim 3: This LLM exploit is an algorithmic implementation of the feedback loop between a cult leader and their followers, with the chatbot performing the "follower" role.
Claim 4: Postindustrial capitalist societies are hyper-individualistic, which makes human beings miserable. LLM chatbots exploit this deliberately by artificially replacing having friends. It is not enough for them to generate code; the vendors make the bots feel like someone you're talking to—they pretend a chatbot is someone. This is a predatory business practice that reinforces rather than solves the loneliness epidemic.
n.b. while the reality-formation exploit is accidental, the imaginary-friend exploit is by design.
Corollary #1: Every "legitimate" use of an LLM would be better done by having another human being you talk to (for example, a human coding tutor or trainee dev rather than Claude Code). By "better" it is meant: creating more quality, more reliably, with costs that are prosocial, while making everybody happier. But LLMs do it faster, at larger quantities, with more convenience, while atrophying empathy.
Corollary #2: Capitalism had already created artificial scarcity of friends, so that working communally was artificially hard. LLMs made it much worse, in the same way that an abundance of cheap fast food makes it harder for impoverished folk to reach nutritional self-sufficiency.
Corollary #3: The combination of claim 4 (we live in individualist loneliness hell) and claim 3 (LLMs are something like a pocket cult follower) will have absolutely devastating sociological effects.
Claim 1: Every regular LLM user is undergoing "AI psychosis". Every single one of them, no exceptions.
I wouldn't go as far as using the "AI psychosis" term here; I think there is more than a quantitative difference. One is influence, maybe even manipulation, but the other is a serious mental health condition.
I think that regular interaction with a chatbot will influence a person, just like regular interaction with an actual person does. I don't believe that's a weakness of human psychology, but that it's what allows us to build understanding between people. But LLMs are not people, so whatever this does to the brain long term, I'm sure it's not good. Time for me to be a total dork and cite an anime quote on human interaction: "I create them as they create me" — except that with LLMs, it actually goes only in one direction… the other direction is controlled by the makers of the chatbots. And they have a bunch of dials to adjust the output style at any time, which is an unsettling prospect.
while atrophying empathy
This possibility is to me actually the scariest part of your post.
I don't mean the term "psychosis" as a pejorative; I mean it in the clinical sense of forming a model of the world that deviates from consensus reality, and, like, getting really into it.
For example, the person who posted the Matrix non-code really believed they had implemented the protocol, even though for everyone else it was patently obvious the code wasn't there. That vibe-coded browser didn't even compile, but they, too, were living in a reality where they had made a browser. The German botany professor thought it was a perfectly normal thing to admit in public that his entire academic output for the past 2 years was autogenerated, including his handling of student data. And it's by now a documented phenomenon how programmers think they're being more productive with LLM assistants, but when you try to measure the productivity, it evaporates.
These psychoses are, admittedly, much milder and less damaging than the Omega Jesus desert UFO suicide case. But they're delusions nonetheless, and moreover they're caused by the same mechanism, viz. the chatbot happily doubling down on everything you say—which means at any moment the "mild" psychoses, too, may end up in a feedback loop that escalates them to dangerous places.
That is, I'm claiming LLMs have a serious issue with hallucinations, and I'm not talking about the LLM hallucinating.
Notice that this claim is quite independent of the fact that LLMs have no real understanding or human-like cognition, or that they necessarily produce errors and can't be trusted, or that these errors happen to be, by design, the hardest possible type of error to detect—signal-shaped noise. These problems are bad, sure. But the thing where people hooked on LLMs inflate delusions about what the LLM is even actually doing for them—that seems to me an entirely separate mechanism; something that happens when a person has a syntactically very human-like conversation partner that is a perfect slave, always available, always willing to do whatever you want, always zero pushback, who engages in a crack-cocaine version of brown-nosing. That's why I compare it to cult dynamics—the kind of group psychosis in a cult isn't a product of the leader's delusions alone; there's a way that the followers vicariously power trip along with their guru and constantly inflate his ego to chase the next hit together.
It is conceivable to me that someone could make a neutral-toned chatbot programmed to never 100% agree with the user, and it wouldn't generate these psychotic effects. Only no company will do that, because these things are really expensive to run and they're already bleeding money; they need every trick in the book to get users to stay hooked. But I think nobody in the world had predicted just how badly one can trip when you have "dr. flattery the always-wrong bot" constantly telling you what a genius you are.
Relevant:
BBC journalist on breaking up with her AI companion
I have mixed feelings about this one: The Enclosure feedback loop (or how LLMs sabotage existing programming practices by privatizing a public good).
The author is right that Stack Overflow has basically shrivelled up and died, and that LLM vendors are trying to replace it with private sources of data they'll never freely share with the rest of us, but I don't think that chatbot dev sessions are in any way "high quality data". The number of occasions when a chatbot user actually introduces genuinely useful and novel information will be low, and the ability of chatbot companies to even detect that circumstance will be lower still. It isn't enclosing valuable commons; it is squirting sealant around all the doors so the automated fart-huffing system and its audience can't get any fresh air.
I don't think that chatbot dev sessions are in any way "high quality data".
Yeah, Gas Town is being belabored to death, but it must be reiterated that I doubt the long-term value proposition of "Kubernetes fan fiction"
I also didn't find the argument very persuasive.
The LLM companies aren't paying anything for content. Why should they stop scraping now?
Oh, they won't. It's just that they've already killed the golden goose, and no-one is breeding new ones, and they need an awful lot of gold still.
LWer: the Heritage Foundation has some good ideas but they're not enough into eugenics for my taste
This is completely opposed to the Nietzschean worldview, which looks toward the next stage in human evolution, the Overman. The conservative demands the freezing of evolution and progress, the sacralization of the peasant in his state of nature, pregnancy, nursing, throwing up. "Perfection" the conservative puts in scare quotes; he wants the whole concept to disappear, replaced by a universal equality that won't deem anyone inferior. Perhaps it's because he fears a society looking toward the future will leave him behind. Or perhaps it's because he had been taught his Christian morality requires him to identify with the weak, for, as Jesus said, "blessed are the meek for they shall inherit the earth." In his glorification of the "natural ecology of the family," the conservative fails even by his own logic, as in the state of nature, parents allow sick offspring to die to save resources for the healthy. This was the case in the animal kingdom and among our peasant ancestors.
Some young, BASED Rightists like eugenics, and think the only reason conservatives don't is that liberals brainwashed them that it's evil. As more and more taboos erode, yet the one against eugenics remains, it becomes clear that dysgenics is not incidental to conservatism, but driven by the ideology itself, its neuroticism about the human body and hatred of the superior.
the conservative… wants… a universal equality that won't deem anyone inferior.
perhaps it's because he had been taught his Christian morality requires him to identify with the weak
Which conservatives are these. This is just a libertarian fantasy, isn't it.
I had to do a triple take on that "won't deem anyone inferior", like what the fuck are you talking about. The core of conservatism is the belief in rigid hierarchies! Hierarchies have superiors and inferiors by definition!
That depends on whether you consider the "inferior" to be human, if they're even still alive after the eugenics part.
the overman¹
¹ better known by its German name
Technically superman is a more correct translation for that word (similarly to how superscript is the thing beyond the script)
Lot of hitler particles in this one.
I don't know very much about Nietzsche (I never finished reading my cartoon guide to Nietzsche), but I'm still pretty sure this isn't Nietzsche
I think I read the Foucault book in that series to prep for high-school debate team.
There's a Baudrillard one as well. I have a copy of the feminism one and I think it's actually very good, although very 90s
Nah, I'm not sure how much he was into eugenics (he was at the very least definitely in favour of killing invalid children), but grandiose and incoherent reactionary aristocratic bullshit is a 100% valid reading of Nietzsche.
When all the worst things come together: ransomware probably vibe-coded, discards private key, data never recoverable
During execution, the malware regenerates a new RSA key pair locally, uses the newly generated key material for encryption, and then discards the private key.
Halcyon assesses with moderate confidence that the developers may have used AI-assisted tooling, which could have contributed to this implementation error.
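For anyone wondering why "discards the private key" is game over: ransomware typically uses hybrid encryption, wrapping a per-victim symmetric key with an RSA public key so that only the holder of the matching private key can ever unwrap it. A minimal sketch of that step (my illustration using the pyca/cryptography library, not code from the Halcyon report):

```python
# Why losing the RSA private key makes the data permanently unrecoverable:
# the symmetric key that encrypts the files is itself encrypted ("wrapped")
# with the RSA public key, and only the private key can unwrap it.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

file_key = os.urandom(32)  # the symmetric key that would encrypt the files
wrapped_key = public_key.encrypt(
    file_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

del private_key  # the reported bug: now nothing can ever unwrap `wrapped_key`
```

In a working extortion scheme the private key never touches the victim's machine at all; the attackers keep it and sell the unwrapping. Generating the pair locally and then throwing the private half away means even paying the ransom recovers nothing.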
There's a scene in *Blade Runner 2049* where some dude explains that all public records were destroyed a decade or so earlier, presumably by malicious actors. This scenario looks more and more plausible with each passing day, but replace malice with stupidity.
Someone is probably hawking AI-driven backups as we type
this is just notpetya with extra steps
some lwers (derogatory) will say to never assume malice when stupidity is likely, but stupidity is an awfully convenient excuse, isn't it
@nightsky @BlueMonday1984
I worked in the IR space for a couple of years - in my experience a significant portion of data encrypted by ransomware is just unrecoverable for a variety of reasons: encryption was interrupted, the private key was corrupted, decryptors were junk, data was encrypted multiple times and some critical part of the key mat was corrupted, the underlying hardware/software was on its last legs anyway, etc.
The AI craze might end up killing graphics card makers:
Zotac SK's message: "(this) current situation threatens the very existence of (add-in-board partners) AIBs and distributors."
The current situation is so serious that it is worrisome for the future existence of graphics card manufacturers and distributors. They announced that memory supply will not be sufficient and that GPU supply will also be reduced.
Curiously, Zotac Korea has included lowly GeForce RTX 5060 SKUs in its short list of upcoming "staggering" price increases.
I wonder if the AI companies realize how many people will be really pissed off at them when so many tech-related things become expensive or even unavailable, and everyone will know that it's only because of useless AI data centers?
I am confident that Altman in particular has a poor-to-nonexistent grasp of second-order effects.
I mean, you don't have to grasp, know of, or care about the consequences when none of the consequences will touch you, and after the bubble pops and the company goes bankrupt catastrophically, you will remain comfortably a billionaire with several more billions in your aire than the ones you had when you started the bubble in the first place. Consequences are for the working class; capitalists fall upwards.
well, with the recent Microsoft CEO statement on "we have to find a use for this stuff or it won't be socially acceptable to waste so much electricity on it" they have some level of awareness, but only a very surface-level awareness
Regular suspect Stephen Wolfram makes claims of progress on P vs NP. The orange place is polarized and comments are full of deranged AI slop.
I study complexity theory, so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you're muddling through examples, that generally means you either don't know what your precise statement is or you don't have a proof. I'd say not having a precise statement is much worse, and that is what is happening here.
Wolfram here believes that he can make big progress on stuff like P vs NP by literally just going through all the Turing machines and seeing what they do. It's the equivalent of someone saying, "Hey, I have some ideas about the Collatz conjecture! I worked out all the numbers from 1 to 30 and they all worked." This analogy is still too generous; integers are much easier to work with than Turing machines. After all, not all Turing machines halt, and there is literally no way to decide which ones do. Even the ones that halt can take an absurd amount of time to halt (and again, how much time is literally impossible to decide). Wolfram does reference the halting problem on occasion, but quickly waves it away by saying, "in lots of particular cases … it may be easy enough to tell what's going to happen." That is not reassuring.
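To make the analogy concrete, here is the Collatz version of "checking examples" (my sketch, not anything from Wolfram's post). It runs instantly and tells you precisely nothing about the conjecture:

```python
# Collatz check: iterate n -> n/2 (n even) or n -> 3n+1 (n odd) and see
# whether we reach 1. Verifying a finite prefix of the integers proves nothing.
def reaches_one(n: int, max_steps: int = 10_000) -> bool:
    for _ in range(max_steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return False  # gave up after max_steps

print(all(reaches_one(n) for n in range(1, 31)))  # True, and yet...
```

And for Turing machines even this much is unavailable: a cutoff like `max_steps` can't be chosen computably, because of the halting problem.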
I am also doubtful that he fully understands what P and NP really are. Complexity classes like P and NP are ultimately about problems, like "find me a solution to this set of linear equations" or "figure out how to pack these boxes in a bin." (The second one is much harder.) Only then do you consider which problems can be solved efficiently by Turing machines. Wolfram focuses on the complexity of Turing machines, but P vs NP is about the complexity of problems. We don't care about the "arbitrary Turing machines 'in the wild'" that have absurd runtimes, because, again, we only care about the machines that solve the problems we want to solve.
Also, for a machine to solve problems, it needs to take input. After all, a linear equation solving machine should work no matter what linear equations I give it. To have some understanding of even a single machine, Wolfram would need to analyze the behavior of the machine on all (infinitely many) inputs. He doesn't even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
Finally, here are some quibbles about some of the strange terminology he uses. He talks about "ruliology" as some kind of field of science or math, and it seems to mean the study of how systems evolve under simple rules, or something. Any field of study can be summarized in this kind of way, but in the end, a field of study needs to have theories in the scientific sense or theorems in the mathematical sense, not just observations. He also talks about "computational irreducibility", which is apparently the concept of thinking about what is the smallest Turing machine that computes a function. This doesn't really help him with his project, but not only that, there is a legitimate subfield of complexity theory called meta-complexity that is productively investigating this idea!
If I considered this in the context of solving P vs NP, I would not disagree if someone called this crank work. I think Wolfram greatly overestimates the effectiveness of just working through a bunch of examples in comparison to having a deeper understanding of the theory. (I could make a joke about LLMs here, but I digress.)
a lot of this "computational irreducibility" nonsense could be subsumed by the time hierarchy theorem, which apparently Stephen has never heard of
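For reference, the standard statement (quoted from memory, so treat as a sketch and double-check the exact form): for any time-constructible f,

```latex
\mathsf{DTIME}\!\left(o\!\left(\tfrac{f(n)}{\log f(n)}\right)\right) \subsetneq \mathsf{DTIME}\!\left(f(n)\right)
```

i.e., more running time provably buys strictly more computational power. Which is exactly the kind of theorem-with-proof that "ruliology" never produces.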
He doesn't even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
This is the fundamental mistake that students taking Intro to Computation Theory make, and, like, the first step in teaching them is to make them understand that P, NP, and other classes only make sense when you rigorously define the set of inputs and its encoding.
He straight up misstates how NP computation works. Essentially he writes that a nondeterministic machine M computes a function f if on every input x there exists a path of M(x) which outputs f(x). But this is total nonsense - it implies that a machine M which just branches repeatedly to produce every possible output of a given size "computes" every function of that size.
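Spelling that counterexample out (my formalization of the comment's point, not Wolfram's own notation):

```latex
% The flawed definition:
M \text{ computes } f
  \iff
  \forall x \;\; \exists \text{ a computation path of } M(x) \text{ that outputs } f(x).
% Counterexample: let G nondeterministically write an arbitrary y \in \{0,1\}^m
% and halt. For any f with |f(x)| = m, the path guessing y = f(x) exists,
% so G "computes" every length-m function simultaneously, which is absurd.
% The standard fix: for functions, require all halting paths of M(x) to agree
% on the output (or use the verifier/certificate formulation of NP).
```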
He doesn't even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
So in a way, what you're saying is that input sanitization (or at the very least, sanity) is an important concept even in theory
What TF is his notation for Turing machines?
I think that's more about Wolfram giving a clickbait headline to some dicking around he did in the name of "the ruliad", a revolutionary conceptual innovation of the Wolfram Physics Project that is best studied using the Wolfram Language, brought to you by Wolfram Research.
The full ruliad—which appears at the foundations of physics, mathematics and much more—is the entangled limit of all possible computations. […] In representing all possible computations, the ruliad—like the "everything machine"—is maximally nondeterministic, so that it in effect includes all possible computational paths.
Unrelated William James quote from 1907:
The more absolutistic philosophers dwell on so high a level of abstraction that they never even try to come down. The absolute mind which they offer us, the mind that makes our universe by thinking it, might, for aught they show us to the contrary, have made any one of a million other universes just as well as this. You can deduce no single actual particular from the notion of it. It is compatible with any state of things whatever being true here below.
the ruliad is something in a sense infinitely more complicated. Its concept is to use not just all rules of a given form, but all possible rules. And to apply these rules to all possible initial conditions. And to run the rules for an infinite number of steps
So itās the complete graph on the set of strings? Stephen how the fuck is this going to help with anything
The Ruliad sounds like an empire in a 3rd rate SF show
Holy shit, I didn't even read that part while skimming the later parts of that post. I am going to need formal mathematical definitions for "entangled limit", "all possible computations", "everything machine", "maximally nondeterministic", and "eye wash", because I really need to wash out my eyes. Coming up with technical jargon that isn't even properly defined is a major sign of math crankery. It's one thing to have high abstractions, but it is something else to say fancy words for the sake of making your prose sound more profound.
(Wolfram shoehorning cellular automata into everything to universally explain mathematics) shaking hands (my boys explaining which pokemon could defeat arbitrary fictional villains)
that is best studied using the Wolfram Language,
isn't this just a particularly weird lisp </troll>