Talking to a hallucination is, in fact, not good for you.
Who knew that “simulating” human conversations based on extruded text strings that have no basis in grounded reality or fact could send people into spirals of delusion?
Are companies who force employees to use LLMs going to be liable for the mental health issues they produce?
Should they be? Absolutely. Will they be? lol
Talking to AI chatbots is about as useful as talking to walls, except that we decided to have those walls talk back to us.
And they aren’t saying anything insightful or useful.
Good small talk tutorial. Terrible everything else
By all accounts, it is still a tool. But knowing society, people want shortcuts to everything. Like using AI as a therapist. That’s a huge no.
Hey now, my walls are perfect companions, they may be silently judging me but they are always supportive and never sycophantic.
Don’t forget to check if the wall is load-bearing before relying on it for support.
One recent peer-reviewed case study focused on a 26-year-old woman who was hospitalized twice after she believed ChatGPT was allowing her to talk with her dead brother.
I feel like the bar for the turing test is lower than ever… You can’t tell ChatGPT apart from your own relatives??
My cousin lost her young daughter a few years back. At Christmas, she had used AI to put her daughter in her Christmas photo. I didn’t have words, because it made her so happy, and I can’t fathom her grief, but man. Felt pretty fucked.
I feel you. I can’t deny the comfort it brought her, but I also can’t help but feel like it is training her to reject her grief.
Not that I’m in any position to pass judgement. I just hope it doesn’t lead to anything more severe.
So the developing psychosis could be causing the AI use?
That’s what the article says, yes:
“The technology might not introduce the delusion, but the person tells the computer it’s their reality and the computer accepts it as truth and reflects it back, so it’s complicit in cycling that delusion,” Sakata told the WSJ.
Thing that tells you exactly what you want to hear causes delusions?
Whaaat?
I completely understand why articles like this need to exist. Information about what ‘AI’ actually is needs to be spread. That being said, I also can’t shake the impression that this is just incredibly obvious. Like one of those studies that tries to determine whether a dog actually loves its owner by going to lengths such as running an MRI of the dog’s brain while it looks at its owner.
Like, thank you mystery researcher on the internet — but you could have saved the helium by just sticking to Occam’s Razor.
“Doctors say”!
One could call it… Cyberpsychosis?
I’d say: know your tools. People misusing “stuff” and being vulnerable to it in general is nothing new. Yet, in a lot of cases, we rely on independence and maturity in the decisions people make. This is no different with LLMs. That said, meaningful (technological) safeguards should of course be implemented wherever possible.
By their very nature, there is no way to implement robust safeguards in an LLM. The technology is toxic, and the best that could happen is that something else, hopefully not based on brute-forcing the production of a stream of tokens, is developed and makes it obvious that LLMs are a false path, a road that should not be taken.
If AI is that dangerous, it should need a licence to use, same as a gun or car or heavy machinery.
You increase the sample size, you increase the number of hits. Proportionally AI is still just as safe. What a bullshit opinion piece. Inconsequential just like the fucks agreeing with this shit take.
You increase the sample size, you increase the number of hits.
Do you think statisticians aren’t well aware of this?
I am a fucking statistician. And you need a fucking control group to establish causality.
Gtfo if you don’t understand this basic principle.
The article and your argument are both entirely devoid of substance.
If the statisticians involved in this case study are anywhere close to as unhinged as you are then it’s no wonder they got those results lol
Homie been smokin’ them data science rocks, it seems.
Literally made an account on this instance just to let them know I think they’re fucking dense, but I decided they’re not even worth interacting with personally.
Huh? The whole point of this emerging scientific debate is that AI use might be proportionally unsafe, i.e. it might be a risk factor causing and/or exacerbating psychosis. Now sure, this is still just a hypothesis and it’s too early to make definite epidemiological statements, but it’s just as wrong to blankly state that AI is “still just as safe”.
“just as safe” is a relational, not absolutist, statement. I’m saying AI is at X level of safety, and more cases emerging does not imply an increasing risk of psychosis. That risk is where it’s always been.
You’re twisting my words because you’re likely one of those brain-dead AI haters.
I don’t particularly love or hate AI, the difference is I look at it critically instead of emotionally. If the population at large had the same X propensity for psychosis as the rate seen with AI usage, that just means it’s correlation without causation.
Alright, but the point is that the “X level of safety” AI is at might be a dangerous level in the first place. I don’t think anybody is arguing that AI got more dangerous as a psychosis risk factor over the past year or so, they’re arguing that AI was a risk factor to begin with, and with increased AI use more evidence of this turns up. So you saying that the inherent risk of AI hasn’t changed is kind of a moot point because that’s not what the debate is about.
Also notice that I clearly said it’s too early to tell one way or the other, so there’s no reason to malign me as uncritical.
You ignored my last paragraph. Yes, it’s too early to tell; hence the opinion piece saying “Almost Certainly Linked To” is a distortion of reality. It’s laughably biased, and it seeds that bias in less-critical readers.
I can agree with that. (As an aside, I think scientific findings are almost always exaggerated like this in popular journalism.)
I’d say the long and short of it is that we simply don’t (and can’t) know yet. But I think more research on possible links between AI and psychotic delusions is definitely useful, because I find the idea of a connection plausible.
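To make that concrete, here’s a minimal sketch of the kind of comparison an actual study would need. Every number in it is invented purely for illustration (none of these figures come from the article or any real dataset): absolute case counts grow with the user base, but only the incidence rate relative to a control group says anything about elevated risk.

```python
# Illustrative only: every number below is invented for the sake of the example,
# not taken from the article or any real dataset.

ai_users, ai_cases = 1_000_000, 350        # hypothetical chatbot users and psychosis cases among them
controls, control_cases = 1_000_000, 300   # hypothetical matched non-users and cases among them

ai_rate = ai_cases / ai_users
control_rate = control_cases / controls

print(f"AI-user incidence:  {ai_rate:.6f}")
print(f"Control incidence:  {control_rate:.6f}")
print(f"Relative risk:      {ai_rate / control_rate:.2f}")

# More users means more absolute cases even when the relative risk stays at 1.0.
# Only the rate comparison against a control group (ideally with confidence
# intervals and adjustment for confounders) speaks to whether AI use adds risk.
```

With those invented numbers the relative risk works out to about 1.17, which on its own still wouldn’t establish causation; it would just be the starting point for the epidemiology the article skips over.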
How do LLM interactions compare to… Kinder eggs or lawn darts in terms of safety?
Kinder eggs are incredibly safe. Lawn darts … less so.
I don’t particularly love or hate AI …
Says the person calling people “fucks agreeing with this shit take”, “brain-dead AI haters”, and “less-critical readers”, and that’s just in this thread alone. Who knows what else I’d find looking through your full posting history.
Not a very convincing act, even for a clank-fucker.
Yes. Because taking a side is a shit take. Defending an article taking a side is a shit take.
Whatever sort of “argument” you think you have by cherry-picking is a shit take.