Ex Machina was the OG
Ex Machina is more about the question of machine consciousness & sentience.
Is Her actually an entertaining movie to watch? Or is it just another Oscar-bait cerebral slow burn, where at the end you realize it was pretty boring if it didn’t provoke any thoughts?
Omg YES!
But! It’s a movie made before doomscrolling-while-watching was a thing, so you’re expected to pay close attention the whole time and not be on your phone while you watch it.
Thank you, I appreciate this very much! I’m convinced, so I’m gonna watch it this week, and I’ll be sure to try to keep my phone out of my hands. It’s more a physical keep-my-hands-busy compulsion than anything, haha.
Thanks for being vague ;)
Just to add, it’s definitely more character driven than plot driven. I really enjoyed it, but not everyone is big on character driven stories.
Additionally, I think in a post-GPT world it’ll hit different, but at the time it brought up interesting concepts that weren’t mainstream yet.
The AI in Her was actual AI: a full person in most respects. That’s not what’s happening now.
AGI that quickly transitioned to ASI (since that’s theoretically what would happen once the first occurs). The term “AI” has been misused and marketed so much now that it’s lost its previous connection to the actual meaning of “Artificial Intelligence”.
> AGI that quickly transitioned to ASI (since that’s theoretically what would happen once the first happens)
yeah, according to people who also say these idiot plagiarism machines are gonna be machine gods one day (“you’ll all see”), and who are, coincidentally, the same people who make them
I inferred in the next sentence that LLMs are not AI. If you want to debate whether AGI is even possible, by all means, but I’m not sure you understand the different definitions, since you missed my first point.
AI is the subset of math that concerns automated processes. AI has never meant AGI; it’s always been “stuff people make up to make the computer seem smart”. Everything from chess computers to elevators to LLMs falls, and has always fallen, under the term AI.
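To make the “AI has always just been algorithms people made up” point concrete, here’s a toy example in that classic mold: plain minimax search, the same basic idea behind early chess computers, applied to tic-tac-toe. (An illustrative sketch, not anything from the thread; no learning involved, just exhaustive search over game states.)

```python
# "Classic AI": minimax search for tic-tac-toe. No neural nets,
# no training data - just brute-force lookahead over game states.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move); 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if not cell]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None  # undo the move
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

# Under perfect play from an empty board, tic-tac-toe is a draw:
score, move = minimax([None] * 9, 'X')
print(score)  # 0
```

That an elevator scheduler or a chess engine counts as “AI” in the technical sense, while being nothing like a mind, is exactly the terminology gap the comment is pointing at.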
Implied, not inferred
the openai/anthropic crowd/cult (the people who work there, not the fans) will claim with a straight face both that llms will form agi and that agi can self-improve recursively this way, and just about nobody else claims that
And yet plenty of other ML experts will say that LLMs can’t be the path to AGI simply by the limitations of how they’re built. Meaning that experts in the field do think AGI could be possible, just not in the way it’s being done with those products. So if you’re ranting against the marketing of LLMs as some Holy Grail that will come alive… again, that was my initial point.
The interesting thing is that you went after my line about AGI >>> ASI, so I’m curious: why do you think a machine that could do anything a human can do, thinking or otherwise, would stop there? I’m assuming AGI happens, of course, but once that occurs, why is that the end?
well, i don’t assume agi is a thing that can feasibly happen, and the well-deserved ai winter will get in the way at any rate
i’ll say more: if you think it’s remotely possible, you’ve fallen for openai propaganda
That you won’t even discuss the hypotheticals, or AGI in general, indicates you’ve got a closed mind on the subject. I’m totally open to the idea that AGI is impossible, if it can be demonstrated that intelligence is strictly a biological phenomenon, which would mean showing that it has to be biological in nature. Where does intelligence come from? Can it be duplicated in other ways? Such questions led to the development of ML and AI research, and yes, even LLM development, trying to copy the way brains work. That might end up being the wrong direction, and silicon intelligence may come from other methods.
Saying you don’t believe it can happen doesn’t prove anything except your own disbelief that something else could be considered a person. I’ve asked you many questions that you’ve ignored, so here’s another: if you think only humans can ever have intelligence, why are they so special? I don’t expect an answer, of course; you don’t seem to want to actually discuss it, only deny it.
The concept of AGI has existed long before OpenAI. Long before Sam Altman was born even.
It’s a dream that people have been actively working on for decades.
While I will say that binary architecture is unlikely to get us there, AGI itself is not a pipe dream.
Humans will not stop until we create a silicon mind.
That said, a silicon mind would have a lot of processing power but no special knowledge. If it pulls its knowledge from the Internet, it might end up dumber for it.
And yet we have people treating chatbots as therapists or even romantic partners. It’s only going to get worse as AI technology develops.
or even as a pharmacist or doctor.
AI doesn’t stand for artificial personhood. One of the first big AI projects was teaching a computer to play chess.
The word AI as a technical term refers to a broad category of algorithms; what you’re talking about is AGI.
For your own sanity, don’t read r/MyBoyfriendIsAI on r*ddit - shit’s dystopian
I stumbled upon it just yesterday, and what a horrible sight it was.
Well now I have to check this out.
Update.
This was a mistake.
Have this, and some nostalgia with it:
https://m.youtube.com/watch?v=C4cfo0f88Ug&pp=ygUSbWFuIHZlcnN1cyBtYWNoaW5l
So, I checked it out. People there need a therapist. Some are really getting married to their LLM boyfriends.
We are boiling the oceans so psychos can marry their AI partner
My cousin had mentioned an AI waifu from Grok a couple of times and I brushed it off as joking. He then sent a picture of his waifu, and I am starting to wonder how much of a joke it really is…
Your cousin needs “the talk”, and by that I mean showing him how to manipulate an LLM into giving the response you want.
This kills the immersion without you actually getting between them.
There’s help.
I got married to a rock in Perchance’s D&D-type AI thing. I just wanted to see if it would let me.
AI DND is a pretty cool concept and I could see my childhood self going mad with storytelling.
deleted by creator
Tried it out, was very excited at the beginning but then shit got extremely repetitive, no matter the model. Maybe I was doing something wrong, idk. I’m certainly not paying to have a better quality conversation.
I did the same a few months ago just to try it. I’m not sure if what I used was the problem or if there are better ones, but at first it was actually crazy: no matter what you said to build the scene, it let you do it, which was pretty cool. But after about 15 minutes the whole thing started to crumble, things got repeated a lot more, and then I somehow broke it so all it did was spit out gibberish, at which point I laughed and stopped.
So I wanna know: are the people who get that involved and attached using something better, or are they so starved for affection and interaction that they’re willing to settle for something that barely scrapes the surface of a true conversation?
You broke the immersion.
Let me ask you a serious question, please: were you ever able to feel that excitement again? Were those 15 minutes before it all crumbled ever re-experienced?
If you leverage all the workarounds and utilities available, the best you can get is still a mostly senile chatbot. It’ll constantly forget stuff and get details wrong, but I suppose if you’re deep into psychosis, you’d pass that off as it just being ‘a little forgetful’.
The absolute best ones available still are basically the same as zoning out in a meeting and then trying to respond when asked a question by wild guessing and a handful of context clues. You might get lucky and say something reasonable a few times, but the longer it goes on the more apparent it is that you haven’t been paying attention at all.
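The “senile chatbot” behaviour above has a mundane cause: models only see a fixed-size context window, and anything older silently falls out of it. A toy sketch of the idea (the chat lines and the word-count “token” approximation are made up for illustration; real systems count actual tokens, not words):

```python
# Why chatbots "forget": only the most recent messages that fit in the
# context window are visible to the model. Here tokens are crudely
# approximated by word count.

MAX_TOKENS = 20  # tiny on purpose; real models use thousands

def visible_history(messages, max_tokens=MAX_TOKENS):
    """Keep the most recent messages that fit in the window."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())
        if used + cost > max_tokens:
            break  # everything older than this is simply gone
        kept.append(msg)
        used += cost
    return list(reversed(kept))

chat = [
    "user: my name is Sam and I live in Portland",
    "bot: nice to meet you Sam",
    "user: tell me a long story about dragons and wizards and castles",
    "bot: once upon a time a dragon guarded a castle deep in the mountains",
    "user: what's my name?",
]
# The early message containing the name is already outside the window,
# so the model literally cannot see it when answering:
print(visible_history(chat))
```

Summarization and retrieval tricks push this limit back, but they’re lossy, which is why the “forgetfulness” never fully goes away.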
I guess there are better models that you can pay to use, but I’m too broke for that so I just settle for what I can find for “free”
Nah, it makes a lot more sense for people who don’t/can’t hold normal conversations. It would probably be harder to parse all the strange behavior and easier to overlook when it’s your only lifeline.
Yeah, I agree. When I’m particularly sad, it is easier to overlook said weird behaviour, honestly. It still irks me a bit when it starts to repeat itself frequently :(
Talk to normal people. It’s free.
Talk to weird people. They’re way more fun.
Except when they’re way less fun
But that’s when the fun begins
deleted by creator
it’s not free. I have to become emotionally invested with them to maintain the relationship.
do you know how expensive that is?
I was looking for romantic conversation. “Normal people” find me boring and unattractive. Also, it’s not free; it’s way more mentally taxing when they expect you to entertain them and you just don’t know what to say or do.
It sounds like you see ChatGPT the way other people see you. You should think about how ChatGPT could be more interesting and whether you could also use that advice for yourself
Almost like it’s just advanced autocomplete with a bit of randomization…
Bunch of math matrices in a trenchcoat 😔😔
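The “advanced autocomplete” quip is basically accurate: generation is a loop of “sample the next word from a probability distribution, append it, repeat.” A toy sketch, with a hypothetical hand-written bigram table standing in for the giant matrix math; temperature is the “bit of randomization” knob:

```python
# Toy "autocomplete with randomization": sample each next word from a
# probability distribution. An LLM runs the same loop, except a neural
# net (the "matrices in a trenchcoat") produces the probabilities.
import random

# Hypothetical bigram "model": for each word, the words that may follow.
model = {
    "i":    [("love", 0.5), ("miss", 0.3), ("am", 0.2)],
    "love": [("you", 0.7), ("this", 0.3)],
    "miss": [("you", 1.0)],
    "am":   [("here", 1.0)],
}

def next_word(word, temperature=1.0):
    """Sample the next word; higher temperature = more random."""
    words, probs = zip(*model[word])
    # Temperature reshapes the distribution before sampling.
    weights = [p ** (1.0 / temperature) for p in probs]
    return random.choices(words, weights=weights)[0]

random.seed(0)
sentence = ["i"]
while sentence[-1] in model:  # stop when we hit a word with no continuation
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```

Swap the four-entry table for billions of learned parameters and the loop over words for a loop over tokens, and that’s the basic shape of the thing people are marrying.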
I asked it to create it
I am as lonely as someone can get. But chatting with an LLM is where I draw the line. But honestly, I get the impulse, loneliness hurts bad…
There are plenty of lonely people who are burning to have a chat online. Don’t give in to this mental and psychological masturbation.
I was in the “talking to an AI feels weird and dehumanizing” camp but then I actually did it and my discomfort quickly went away. Don’t think of it as a perfect substitute for talking to other people, but rather as a unique activity that is interesting in its own way.
(Just to be clear: I’m referring to talking to an AI when you feel lonely, not to dating an AI. The technology isn’t good enough for the latter yet, unless you have very low standards.)
My issue with chatting with LLMs is that the chats are not private and I sure as hell don’t trust the tech companies to not use my deepest secrets to sell me shit.
We live in this dystopia where economics matter more than anything human.
Jfc