

Can we talk about the Tamagotchi feature they were looking to add in for April 1? Because apparently it needed a little friend but also with gacha mechanics because we live in hell?


The classic 40k catch-22: either it doesn't do what you're claiming it does, in which case you're a heretic lying to the inquisition, OR it does and you're summoning the spirits of the dead like a necromancer heretic.


Yeah, letting the intrinsically insecure RNG recursively rewrite its own security instructions definitely can't go wrong. I mean, they limited it to only do so when the users asked nicely!
Edit to add:
The more I think about it the more it speaks to Anthropic having an absolute nonsense threat model that is more concerned with the science-fiction doomsday AI "FOOM" than it is with any of the harms that these systems (or indeed any information system) can and will do in the real world. The current crop of AI technologies, while operating at a terrifying scale, are not unique in their capacity to waste resources, reify bias and inequality, misinform, justify bad and evil decisions, etc. What is unique, in my estimation, is both the massive scale at which these things operate despite the incredible costs of doing so and their seeming immunity to being reality-checked on this. No matter how many times the warning bells ring about these systems' vulnerability to exploitation, the destructive capacity of AI sycophancy and psychosis, or the simple inability of the electrical infrastructure to support their intended power consumption (or at least their declared intent; in a bubble we shouldn't assume they actually expect to build that much), the people behind these systems continue to focus their efforts on "how do we prevent Skynet" over any of it.
Thinking in the context of Charlie Stross's old talk about corporations as "slow AI," I wonder if some of the concern comes either explicitly or implicitly from an awareness that "keep growing and consuming more resources until there's nothing left for anything else, including human survival" isn't actually a deviation from how these organizations are building these systems. It's just the natural conclusion of the same structures and decision-making processes that lead them to build these things in the first place and ignore all the incredibly obvious problems. They could try to address these concerns at a foundational or structural level instead of just appending increasingly complex forms of "please don't murder everyone or ignore the instructions to not murder everyone" to the prompt, but doing that would imply that they need to radically change their entire course up to this point, and increasingly that doesn't appear likely to happen unless something forces it to.


The grand irony is I'm not even sure most people click on or read this sort of stuff. I don't think it's often even created to be read by anyone. I think it's created as a sort of swaddling fan fiction for MBAs, advertisers, event sponsors and sources, so they can tune out ethical quibbles and feel good about how clever they are.
Every time someone hypes up Steve Jobs's "reality distortion field" this is what they're actually talking about, whether they realize it or not.


I was sufficiently interested based off of this that I tracked down a few others of his. This one felt like a good take for an era where these things are being used for more than just slop generation despite the underlying flaws not being resolved.


It felt very much like the devil's bargain of online media. Like, you can have your prestige journalism as a treat, but only if the slop flows fast enough that we need clout more than eyeballs.


Ironically I think it's also been discussed most frequently within Rationalist circles that these types of intelligence aren't often correlated. I'm not going to chase down links right now because doing an SSC archive exploration requires more mental fortitude than I currently possess, but I distinctly remember that a recurring theme was "if nerds are so smart why don't they rule the world?" In my less cynical days I had assumed that his confusion on this point was largely rhetorical, intended to illustrate some part of whatever point was buried in the beigeness. Now it seems like I was falling victim to the ability to project whatever tangentially-related thesis you want onto the essay and find supporting arguments because of how badly it's written.


Fuck it, I'm good to Gonch this out.
I forget, did we ever actually learn who the killer was in Murder at Wizard University? I remember it kept coming up through the first book as a kind of motif for how this new world wasn't necessarily as safe and clean as Tommy expected, but I think that whole business with the Thoughtknot ended up overshadowing it before the actual killer was revealed. Like, I get it thematically or whatever, but it just stuck in my head as a loose thread and has bugged me for years.


On a purely rhetorical point, it seems like the whole counterargument from Gwern is just an argument-by-disorganization or something to that effect. He doesn't actually challenge the factual information presented, but does shift how those facts are framed and what the actual contention is in the background, and then avoids actually engaging with the new contention from the bottom up.
In a lot of discussions with singularity cultists (both pros and antis) they assume that a true superintelligence would render the whole universe deterministically predictable to a sufficient degree to allow it to basically do magic. This is how the specifics of "how and why does the AI kill all humans again?" tend to be elided, for example. This same kind of thinking is also at the heart of their obsession with "superpredictors" who can, it is assumed, use some kind of trick to beat this kind of mathematical limit in certainty (this is the part where I say something about survivorship bias). In the context of that discussion, the fact that a relatively simple arrangement of components following relatively simple, deterministic rules is still not meaningfully predictable past a dozen or so sequential events, due to the magnification of the inevitable error in our understanding of the initial circumstances, is a logical knockout.
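That error-magnification point is easy to demonstrate for yourself. Here's a minimal sketch (my own illustration, not anything from the thread) using the logistic map, a textbook chaotic system: even with the initial state known to one part in a trillion, a fully deterministic one-line update rule becomes unpredictable after a few dozen steps.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; at r=4 it is fully chaotic."""
    return r * x * (1.0 - x)

def trajectory(x0, steps=100):
    """Iterate the map deterministically from a given starting state."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.4)            # the "true" initial state
b = trajectory(0.4 + 1e-12)    # same state with a tiny measurement error

print(abs(a[5] - b[5]))    # early on, the two futures still agree almost exactly
print(abs(a[60] - b[60]))  # the gap roughly doubles each step, so by now the forecast is noise
```

The gap between the two runs grows exponentially until it saturates at the size of the whole state space, at which point no amount of computation recovers predictive power; the only fix is more precise initial knowledge, which buys logarithmically few extra steps.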
Rather than engage with this, however, Gwern and his compatriots in the thread focus in on the tangent about how high-level pinball players are able to control for that uncertainty by avoiding the region of the board where those error-magnifying parts are. However, this is not the same argument, and it raises the question of whether those high-chaos areas are always avoidable as they are in a pinball machine. Rather than engage with that question, Gwern doubles down on the pinball analogy, shifting the question even further from "how well can we predict the deterministic motion of a ball given the inevitable uncertainty of our initial state" to "how many ways can we convince a third party we've gotten a high score on a pinball machine". At this point we're not just moving the goalposts, we've moved the entire stadium into low earth orbit and gotten real cute about whether we're playing 🏈 or ⚽ football.
And given the conversation surrounding the thread and these topics on LW I'm not even going to assume that such a wild shift is the result of bad faith instead of simple disorganization and sloppiness of rhetoric. This is what happens to a community that conflates "it makes me feel smart" with "it actually communicates the point effectively".


A) At this point I would be more surprised to learn that AI psychosis wasn't infecting the upper tiers of the White House, tbh. Like, at this point we could get a leak that Hegseth had been developing a literal god complex alongside his LLM mistress and I wouldn't bat an eye.
B) It seems like a particularly bad sign that this is coming from the Saudis, given that they've been a consistent ally that the US has spent a lot of material resources and political capital to support. Ed: not actually an official Saudi government source. When you assume you make an ass of yourself, etc.


I mean it looks kinda swastikesque imo, especially with the ambiguity over whether it's supposed to be one or two "I"s behind it. (In some cases it's FII with the second I split, and sometimes it's FIIInstitute with the top of the second and bottom of the third "I" visible.)


I doubt they have the individual or institutional capacity to go after them in a timely and competent fashion, but there's plenty of time before August for someone to remind them about it, especially since this was a way for Anthropic and friends to reclaim some positive space in the news cycle. I can see some bad news for the bubble and/or war hitting in, say, June and causing Amodei to break out the "we stood up to Trump" story again, which will in turn remind the dodderer-in-chief that they were gonna try and do something about that guy.


Part of what makes the RatFic version of this so weird imo is that despite being ostensibly rooted in relatively low-hanging fruit (e.g. what if we industrialized this premodern setting, what if we rationally looked at the rules of this magic system, etc.) nobody other than the protagonist has ever thought about these things, and even once the protagonist starts demonstrating some real world-conquering results (benevolently, of course) nobody ever really seems to want to copy their successes. Part of what made the actual industrial revolution unfold the way it did was the ensuing arms race. In addition to causing the lines on various economists' charts to go nearly vertical, this also basically culminated in the First World War, which seems like the kind of event that they should be aware of. But of course in RatFic it seems like anyone who can't be talked around to joining up with our protagonist is too weak or woke or stupid to actually pose a threat to the Glorious March of Rational Progress.


You know, when I think about securely holding onto things and protecting them without damaging or dropping them, I think of a fucking OPEN CLAW said nobody ever.


I don't know, I think there's real value in allowing the public to keep an eye on aspiring supervillains.


I… "Fireproof steel I-beams" has to be taking the piss, right? Right???


Requiescat in urina, more like. Does that make it the first of this generation of slopmakers to actually get shut down?


Oh sure, but when I send a cover letter where Claude Code told me about serious security issues and I used that knowledge to replace their internal app portal with my face, I've "violated the computer fraud and abuse laws" or whatever.


I feel like āvaluation goes down when bad things happenā shouldnāt be this surprising and yet here we are.
Don't they have a version of Breakout buried somewhere in Excel? Sounds like an entertainment purpose to me.