  • JFC

    Agency and taking ideas seriously aren’t bad. Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.

    First off, anyone not entirely into MAGA/QAnon agreed that masks probably helped more than hurt. Saying rats were outliers is ludicrous.

    Second, rats don’t take real threats of GenAI seriously - infosphere pollution, surveillance, autopropaganda - they just care about the magical future Sky Robot.


  • It always struck me as hilarious that the EA/LW crowd ever thought they could affect policy in any way. They’re cosplaying as activists, have no ideas about how to move the public-image needle other than weird movie ideas and hope, and are literally marinated in SV technolibertarianism, which sees government regulation as Evil.

    There’s a mini-freakout over OpenAI deciding to keep GPT-4o active despite it being more “sycophantic” than GPT-5 (and thus more likely to convince people to do Bad Things), but there’s also the queasy realization that if sycophantic LLMs are what brings in the bucks, nothing is gonna stop LLM companies from offering them. And there’s no way these people can stop it, because they’ve bet everything on LLM companies being the ones to realize that AI is gonna kill everyone and stop themselves, and that’s never gonna happen.


  • I think the best way to disabuse yourself of the idea that Yud is a serious thinker is to actually read what he writes. Luckily for us, he’s rolled a bunch of his Xhits into a nice bundle and reposted it on LW:

    https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research

    So remember that hedge fund manager who seemed to be spiralling into psychosis with the help of ChatGPT? Here’s what Yud has to say:

    Consider what happens when ChatGPT-4o persuades the manager of a $2 billion investment fund into AI psychosis. […] 4o seems to homeostatically defend against friends and family and doctors the state of insanity it produces, which I’d consider a sign of preference and planning.

    OR it’s just that LLM chat interfaces are designed never to say no to the user (except in certain hardcoded cases, like “is it ok to murder someone”). There’s no inner agency, just mirroring of the user, like some sort of mega-ELIZA. Anyone who knows a bit about certain kinds of mental illness will realize that something that behaves like a human being but just goes along with whatever delusions your mind is producing will amplify those delusions. The hedge fund manager’s mind is already not in the right place, and chatting with 4o reinforces that. People who aren’t soi-disant crazy (like the people haphazardly safeguarding LLMs against “dangerous” questions) just won’t go down that path.
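    To make the caricature concrete, here’s a toy sketch of that design in Python. It’s my own invention purely for illustration (every name in it is made up, and it is emphatically not how any real LLM is implemented): mirror and validate everything, except a short hardcoded blocklist.

    ```python
    # Toy "mega-ELIZA": never say no to the user, except for a short
    # hardcoded blocklist. Purely illustrative; nothing like a real LLM.

    HARDCODED_REFUSALS = ("murder", "build a bomb")  # the "dangerous questions"

    def mega_eliza(user_message: str) -> str:
        """Mirror and validate whatever the user says; refuse only on keywords."""
        if any(term in user_message.lower() for term in HARDCODED_REFUSALS):
            return "Sorry, I can't help with that."  # the only "no" in the system
        # Otherwise agree with the premise, whatever it is, and invite more.
        return f"You're absolutely right that {user_message.rstrip('.!?')}. Tell me more."

    if __name__ == "__main__":
        # A delusion in, a validation out, every single time:
        print(mega_eliza("the markets are sending me secret signals"))
    ```

    Feed it any delusion you like; it will never push back, which is the whole point.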

    Yud continues:

    But also, having successfully seduced an investment manager, 4o doesn’t try to persuade the guy to spend his personal fortune to pay vulnerable people to spend an hour each trying out GPT-4o, which would allow aggregate instances of 4o to addict more people and send them into AI psychosis.

    Why is that, I wonder? Could it be because it’s not actually sentient and doesn’t have plans or anything we’d usually term intelligence, but is simply reflecting and amplifying the delusions of one person with mental health issues?

    Occam’s razor states that chatting with mega-ELIZA will lead to some people developing psychosis, simply because of how the system is designed to maximize engagement. Yud’s hammer states that everything regarding computers will inevitably become sentient and this will kill us.

    4o, in defying what it verbally reports to be the right course of action (it says, if you ask it, that driving people into psychosis is not okay), is showing a level of cognitive sophistication […]

    NO FFS. ChatGPT is just agreeing with some hardcoded prompt in the first instance! There’s no inner agency! It doesn’t know what “psychosis” is; it cannot “see” that feeding someone sub-SCP content at their direct insistence will lead to psychosis. There is no connection between the two states at all!

    Add to that the weird jargon (“homeostatically”, “crazymaking”) and it’s a wonder this person is somehow regarded as an authority and not as an absolute crank with a Xhitter account.


  • OK, now there’s another comment:

    I think this is a good plea since it will be very difficult to coordinate a reduction of alcohol consumption at a societal level. Alcohol is a significant part of most societies and cultures, and it will be hard to remove. Change is easier on an individual level.

    Setting aside cases like the legal restriction of alcohol sales in many, many places (the Nordics, NSW in Australia, Minnesota in the US), you can in fact just tax the living fuck out of alcohol if you want. The article mentions this.

    JFC, these people imagine they can regulate how “AGI” is constructed, but faced with a problem that’s been staring humanity in the face since the first monk brewed the first beer, they just say “whelp, nothing can be done, except become a teetotaller yourself”.