Neopets at least brought joy to a generation of nascent furries. Copilot is fixing to have the exact opposite impact on internet infrastructure.
The way rationalists use "priors" and other Bayesian language is closer to how cults use jargon and special meanings to isolate members and tie them more closely to the primary information source (the cult leader). It also serves as a way to perform allegiance to the cult's ideology, which I think is what's happening here.
Grumble grumble. I don't think that "optimizing" is really a factor here, since a lot of times the preferred construct is either equivalent (such that) or more verbose (a nonzero chance that). Instead it's more likely a combination of simple repetition (like how I've been calling everyone "mate" since getting stuck into Taskmaster NZ) and identity performance (look how smart I am with my smart people words).
When optimization does factor in, it's less tied to the specific culture of tech/finance bros than it is a simple response to the environment and technology they're using. Like, I've seen the same "ACK" used in networking and among older radio nerds because it fills an important role.
What exactly would constitute good news about which sorts of humans ChatGPT can eat?
Maybe like with standard cannibalism they lose the ability to post after being consumed?
Maybe "storyteller" would be more accurate? Like, the prompt outputs were pretty obviously real and I can totally buy that he asked it to write an apology letter while dicking around waiting for Replit to restore a backup, but the question becomes whether he was just goofing off and playing into his role to make the story more memable or whether he was actually that naive.
Ouch. Also, I'm raging and didn't even realize I had barbarian levels.
I feel like the greatest harm that the NYT does with these stories is not allowing the knowledge of just how weird and pathetic these people are to be part of the story. Like, even if you do actually think that this nothingburger "affirmative action" angle somehow matters, the fact that the people making this information available and pushing this narrative are either conservative pundits or sad internet nazis who stopped maturing at age 15 is important context.
Honestly I'm surprised that AI slop doesn't already fall into that category, but I guess as a community we're definitionally on the farthest fringes of AI skepticism.
I feel like this response is still falling for the trick on some level. Of course it's going to "act contrite" and talk about how it "panicked" because it was trained on human conversations, and while that no doubt included a lot of Supernatural fanfic, the reinforcement learning process is going to focus on the patterns of a helpful assistant rather than a barely-caged demon. That's the role it's trying to play, and the work it's cribbing the script from includes a whole lot of shitposts about solving problems with "rm -rf /".
Copy/pasting a post I made in the DSP driver subreddit that I might expand over at morewrite, because it's a case study in how machine learning algorithms can create massive problems even when they actually work pretty well.
It's a machine learning system, not an actual human boss. The system is set up to try and find the breaking point, where if you finish your route on time it assumes you can handle a little bit more, and if you don't it backs off.
The real problem is that everything else in the organization is set up so that finishing your routes on time is a minimum standard, while the algorithm that creates the routes is designed to make doing so just barely possible. Because it's not fully individualized, this means that doing things like skipping breaks and waiving your lunch (which the system doesn't appear to recognize as options) effectively pushes the edge of what the system thinks is possible out a full extra hour, and then the rest of the organization (including the decision-makers about who gets to keep their job) turns that edge into the standard. And that's how you end up where we are now, where actually taking your legally-protected breaks is at best a luxury for top performers or people who get an easy route for the day, rather than a fundamental part of keeping everyone doing the job sane and healthy.
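To make the dynamic concrete, here's a minimal sketch of the feedback loop as I understand it. Everything in it is a hypothetical stand-in (the function, the update rule, the numbers), not anything Amazon has published:

```python
# Hypothetical sketch of a breaking-point-seeking route sizer.
# Assumption: on-time finishes push the target up, misses pull it back down.

def update_capacity_estimate(estimated_stops: float, finished_on_time: bool) -> float:
    """Probe for the edge: demand a bit more after every success, back off on failure."""
    if finished_on_time:
        return estimated_stops * 1.05  # success: assume the driver can handle more
    return estimated_stops * 0.90      # failure: retreat a little and try again

# The poison: a driver who skips breaks and waives lunch still reads as a
# "success," so the hidden extra hour of labor gets baked into the estimate.
estimate = 180.0  # stops per day; made-up starting point
for day in range(30):
    finished = True  # driver worked through breaks to hit the target
    estimate = update_capacity_estimate(estimate, finished)
print(f"Estimated capacity after a month: {estimate:.0f} stops")
```

Note that the algorithm is doing exactly what it was designed to do; the harm comes from the rest of the organization treating its output as a floor instead of an experiment.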
Part of that organizational problem is also in the DSP setup itself, since it allows Amazon to avoid taking responsibility or accountability for those decisions. All they have to do is make sure their instructions to the DSP don't explicitly call for anything illegal, and they get to deflect all criticism (or LNI inquiries) away from themselves and towards the individual DSP, and if anyone becomes too much of a problem they can pretend to address it by cutting that DSP.
I'm not gonna advocate for it to happen, but I'm pretty sure the world would be overall in a much healthier place geopolitically if someone actually started yeeting missiles into major American cities and landmarks. It's too easy to not really understand the human impact of even a successful precision strike when the last times you were meaningfully on the other end of an airstrike were ~20 and ~80 years ago, respectively.
Someone didn't get the memo about nVidia's stock price, and how is Jensen supposed to sign more boobs if suddenly his customers all get missile'd?
You know, I hadn't actually connected the dots before, but the dust speck argument is basically yet another ostensibly-secular reformulation of Pascal's wager. Only instead of Heaven being infinitely good if you convert, there's some infinitely bad thing that happens if you don't do whatever Eliezer asks of you.
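The arithmetic trick is the same in both cases: multiply a trivial harm (or payoff) by a big enough number and it swamps any finite cost. A toy version, with every magnitude pulled out of thin air:

```python
# Toy expected-disutility comparison behind the dust speck argument.
# All numbers are made-up placeholders; 3^^^3 won't fit in any computer,
# so a merely astronomical stand-in is used here.

torture_disutility = 1e9        # one person tortured for 50 years
dust_speck_disutility = 1e-6    # one barely-noticed dust speck
people_affected = 10 ** 30      # stand-in for 3^^^3

# Naive utilitarian sum: the multiplier does all the work, so the specks "win."
print(dust_speck_disutility * people_affected > torture_disutility)  # True
```

Once you accept unbounded multipliers, whoever controls the hypothetical controls the conclusion, which is the whole point of the wager.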
The big shift in per-action cost is what always seems to be missing from the conversation. Like, in a lot of my experience the per-request cost is basically negligible compared to the overhead of running the service in general. With LLMs, not only do we see massive increases in overhead costs due to the training process necessary to build a usable model, but each request that gets sent also has a higher cost. This changes the scaling logic in ways that don't appear to be getting priced in or planned for in discussions of the glorious AI technocapital future.
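Rough toy math to show the shape of the problem; all the dollar figures here are invented for illustration:

```python
# Hypothetical cost curves for a traditional web service vs. an LLM service.
# The specific numbers are made up; the point is the marginal cost structure.

def traditional_cost_per_request(requests: int) -> float:
    fixed = 10_000.0           # servers, ops, etc. per month
    marginal = 0.0001          # CPU/bandwidth per request: near-negligible
    return (fixed + marginal * requests) / requests

def llm_cost_per_request(requests: int) -> float:
    fixed = 10_000_000.0       # amortized training run dwarfs normal overhead
    marginal = 0.01            # GPU inference per request: much pricier
    return (fixed + marginal * requests) / requests

# Traditional services approach ~zero cost per request at scale; LLM services
# bottom out at their high marginal cost, so growth never dilutes it away.
for n in (1_000_000, 100_000_000):
    print(f"{n:>11,} requests: {traditional_cost_per_request(n):.6f} vs {llm_cost_per_request(n):.6f}")
```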
While I also fully expect the conclusion to check out, it's worth acknowledging that the actual goal for these systems isn't to supplement skilled developers who can operate effectively without them; it's to replace those developers either with the LLM tools themselves or with cheaper and worse developers who rely on the LLM tools more.
I think it's a better way of framing things than the TESCREALs themselves use, but it still falls into the same kind of science fiction bucket imo. Like, the technology they're playing with is nowhere near the level of full brain emulation or mind-machine interface or whatever that you would need to make the philosophical concerns even relevant. I fully agree with what Torres is saying here, but he doesn't mention that the whole affair is less about building the Torment Nexus and more about deflecting criticism away from the real and demonstrable costs and harms of the way AI systems are being deployed today.
Is that Pat Rothfuss in the picture?
I'm not comfortable saying that consciousness and subjectivity can't in principle be created in a computer, but I think one element of what this whole debate exposes is that we have basically no idea what actually makes consciousness happen or how to define and identify it happening. Chatbots have always challenged the Turing test because they showcase how much we tend to project consciousness onto anything that vaguely looks like it (an interesting parallel to ancient mythologies explaining the whole world through stories about magic people). The current state of the art still fails at basic coherence over shockingly small amounts of time and complexity, and even when it holds together it shows a complete lack of context and comprehension. It's clear that complete-the-sentence style pattern recognition and reproduction can be done impressively well in a computer and that it can get you farther than I would have thought in language processing, at least imitatively. But it's equally clear that there's something more there, and just scaling up your pattern-maximizer isn't going to replicate it.
In conjunction with his comments about making it antiwoke by modifying the input data rather than relying on a system prompt after filling it with everything, it's hard not to view this as part of an attempt to ideologically monitor these tutors to make sure they're not going to select against versions of the model that aren't in the desired range of "closeted Nazi scumbag."
Damn you, Scott! Stop making me agree with people who created blockchain-based dating apps!