cross-posted from: https://lemmy.world/post/44340504
Our actions and voices do make a difference! Keep AI out of games and reward original creative work.
Someone should have heckled him off stage at that moment. We are all shocked and sad that a birdbrain like him holds any power in games publishing.
Because some other dipshit sold them on the idea that they’d be able to continue to make games people wanted to pay for without paying other people to make them. Cry me a river and I’ll piss you a puddle.
Oh boo hoo
Greedy fucker “investors” selling their book is literally one of the greatest informational problems of the modern age. They’ll do everything in their power to mislead others, from plain old lying and appeals to emotion to buying up traditional news media and turning them into propaganda outlets, and funding projects and even institutions to spread misinformation, purely to push up the profits of their “investments”. And because money is the top power in the Neoliberal era in most of the West, the wealthiest ones have huge power to pollute the information space.
Major investor is ‘shocked and sad’ they’re not seeing returns on AI.
Shocked and sad he is going to be broke.
That is not very likely, I guess
They are really disappointed in us.
Mm, few things get me as excited as investors being sad. Cry harder, baby.
GenAI sucks.
And no matter how they gaslight us, it continues to suck.
Investors don’t care about games as art, they care about games as a vehicle for making money.
If they are pushing for AI in games, it’s because they think it will make them money, not because they think it will be good for games.
Yeah, I’d say that’s one of the reasons they don’t like it! Others include the use of artists’ work without consent, environmental issues, the quality of AI output, and the feeling that automating culture production can only result in what is now commonly called “AI slop”.
Summed up perfectly why people hate AI in culture. AI can be very useful in science, medicine, engineering, and similar professions, when the AI is built upon a very specific data set. There is no conscious reasoning behind what the AI does when it makes art.
Generative AI is just slop. It takes previous works and repackages them however the code dictates. When people make art, there are hundreds of micro decisions that they make. Those micro decisions are gone when AI makes it. Gabi Belle did a great video on why they hate AI art. https://youtu.be/QtZDkgzjmQI
AI is generally only considered useful in professions people aren’t actually familiar with. In other words, in its current form it isn’t useful to actual experts in anything.
I don’t think they were talking about GenAI with that, and AI (aka ML models) built on specific data sets for a specific purpose can be quite useful. Expecting an LLM to do anything other than language processing well, on the other hand, is insanity.
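The distinction above — a narrow model built on a specific data set for one specific purpose, as opposed to a general-purpose LLM — can be sketched in a few lines. This is a toy 1-nearest-neighbour classifier; the feature names and all the numbers are made up purely for illustration:

```python
# A deliberately tiny 1-nearest-neighbour classifier: a narrow model
# fit to a specific data set for one job, the kind of task-specific
# "AI" (really ML) the comment above is defending. All data invented.
def nearest_label(samples, point):
    """Return the label of the training sample closest to `point`."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(samples, key=lambda s: dist2(s[0], point))[1]

# (feature vector, label) pairs — hypothetical measurements.
training = [
    ((1.0, 0.5), "benign"),
    ((1.2, 0.7), "benign"),
    ((3.5, 2.9), "malignant"),
    ((3.8, 3.1), "malignant"),
]

assert nearest_label(training, (1.1, 0.6)) == "benign"
assert nearest_label(training, (3.6, 3.0)) == "malignant"
```

A model like this can only ever answer the one question its data set encodes — which is exactly why it can be reliable where an LLM asked to do “anything” is not.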
“Generative AI is great at doing everything I suck at, but it’s completely terrible at the things I actually know!”
Too many people think this and don’t seem to understand that it’s pretty shitty at everything. Well, except getting people to kill themselves, I guess. It’s pretty good at that.
Part of the problem is how broad the term AI is, and how narrowly it is used. People just mean autoregressive LLMs and maybe diffusion models, while the term AI is much broader than even machine learning (for instance, formal reasoning), which is again broader than backpropagation with gradient descent (for instance, boosted trees), which is again broader than generative AI (for instance, classifiers and deep learning). All of these are definitely useful in science and engineering and have been for decades, although LLMs are now beginning to find uses as well.
Cue the serial killer telling me that I don’t know what I’m talking about and that they could get people to kill themselves so much better and easier.
With the way AI companies seem to avoid liability for everything. Fantastic way of becoming a serial killer. Can we workshop some serial killer names?
ShotGPT? Anthraxic?
I was watching Ryan Hall and his little AI bot the other day. It occasionally goes off the rails… Weird how he keeps trying, though. Sometimes a bit entertaining, but if something I was using malfunctioned that much, I would not consider it a useful tool.
The silver lining for the AI companies is that there’s a lot of real humans getting real money that are also really shitty at what they are paid to do.
Coincidentally, Hollywood is pretty good at portraying every profession except the one I know!
“Won’t someone please think of the poor shareholders!?”
Good, those dirty fuckers don’t deserve accolades or rewards for peddling their lies about the capabilities of LLMs (which are limited, because these are just tools). It’s honestly better that creative endeavors like games development are human-led, because LLM garbage is so flat and empty. Humanity might have tricked rocks into carrying out complex calculations and other operations using silicon and electricity… We haven’t taught them to think or feel. Human beings with lived experiences should be the only people involved in the creative and technical aspects of games development.
I hope they eventually take the L on peddling LLMs as AI, moving on to normal grifts I can point and laugh at them about. ROFL
Hahahahahahahahahahahahaha
Poor thing
keep AI out of games
Good luck, it’s here to stay, get used to it lol.
Anyone who thinks the average developer isn’t using AI heavily in their code is delusional; it’s been baked into every major IDE for like 2 years now.
It’s in there, it’s permeated every layer of game dev, it works when you use it right, and the only time people care is when you make it obvious (i.e. including it in the final art of the game).
But no one even blinks an eye at all the other layers AI is used in unless you announce it.
You should just assume every game you play made after 2024 has chunks of it that are AI generated. The plot, writing, code… it’s in there, and you prolly haven’t even noticed.
Good luck, it’s here to stay, get used to it lol.
So are we. Get used to it.
You should just assume every game you play made after 2024 has chunks of it that are AI generated. The plot, writing, code… it’s in there, and you prolly haven’t even noticed.
Oh, we’ve noticed that AAA game quality is shittier than ever, trust me.
Yeah, it’s permeated way more than AAA.
But trying to convince game devs to not use AI is about as likely to succeed as convincing them to stop using their IDEs.
What will actually happen is everyone is going to just stop announcing they are using it, and every month that goes by it’ll get harder and harder to tell.
You lost?
While people may be opposed even in theory to more tame things like a little code completion, there’s plenty of room to very obviously notice GenAI slop.
If people use an LLM to generate text, they tend to make too much of it, and it shows in how off-putting it is. An LLM may be able to generate a modest text without notice, but people will put in a two-liner, get pages of garbage back, and use that.
And of course, famously, GenAI textures are generally off-putting. Maybe you can have a ‘generic metal texture’ and no one will notice, but try for specific details and it generally gets caught.
It is possible that human output that is similarly crappy gets mistaken for GenAI output, but oh well, slop is slop either way. It’s just that GenAI extends the slop to unbelievable magnitude.
I agree, people tend to use it very poorly.
That largely stems from it still being a fairly new tool and, to be honest, it being quite unintuitive to use well.
There’s a lot of fundamentally bad ways to use AI that feel natural due to the way an LLM creates the illusion of thinking.
For example, one of the first things you learn in prompt engineering is: don’t correct an LLM’s mistakes in-thread. This is unintuitive, but keeping the mistake in the context reinforces the LLM to make more mistakes.
Instead you have to go back in the history and edit your prior message to “pre”-correct it before the mistake was made, then regenerate.
It’s a subtle thing, but it makes the difference between producing stupid useless garbage and actually not-half-bad output.
Pretty much every “trick” to it is unintuitive like this, so that’s why so much of what you see AI producing from people in the industry is garbage. I’d estimate 95%+ of people are just straight up using it very wrong, becoming frustrated, and producing slop-tier output.
Which is a big waste of resources atm. More work has to go into education on how to use this stuff efficiently, so it’s not wasting resources and slop levels go down.
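The “edit the history, don’t correct in-thread” trick described above is easy to show with a plain messages list. The role/content dict format follows the common chat-API convention; the prompts and the `precorrect` helper are made up for illustration, and the actual API call to regenerate is left out:

```python
# Chat history as a list of {role, content} dicts, as most chat APIs use.
history = [
    {"role": "user", "content": "Write a health-bar HUD widget."},
    {"role": "assistant", "content": "...buggy widget code..."},
]

# The natural-feeling but worse approach: appending a correction keeps
# the mistake in context, where the model can reinforce it next turn.
corrected_in_thread = history + [
    {"role": "user", "content": "No, the bar should drain left-to-right."}
]

# The unintuitive but better approach: rewind, fold the fix into the
# original prompt, drop the bad reply, and regenerate from clean context.
def precorrect(history, fix):
    first_prompt = history[0]["content"]
    return [{"role": "user", "content": first_prompt + " " + fix}]

regenerated = precorrect(history, "The bar should drain left-to-right.")

assert len(regenerated) == 1  # the bad output is gone from the context
assert "left-to-right" in regenerated[0]["content"]
```

The point is purely about what ends up in the context window: the in-thread correction sends three messages (including the mistake) back to the model, while the pre-corrected history sends one clean prompt.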
While people may be opposed even in theory to more tame things like a little code completion, there’s plenty of room to very obviously notice GenAI slop.
I mean there’s the regular “can you really sell code you don’t own” kind of thing going for it. The companies have stolen all sorts of data; voices, music, raster, vector, video, books, film. It’d be shocking if they also haven’t scraped all the code that’s out there on the web.
Some of that is perfectly fine to alter, and sell. A lot of it isn’t. There are plenty of FOSS licenses that are restrictive in the sense that you’re free to use it and change it, but you can’t alter the license of it, and in many cases not sell it.
So when an LLM produces code based on that, what applies?
–
Then there’s the obviously broader problem of ex-developers turned vibe coders coming out of the woodwork talking about how they can’t code anymore. I’ve heard people at my company joking about this, and the notion scares me. The idea that they’ve outsourced their thinking and problem-solving skills to the point that they’re now incapable of doing it is terrifying.
I don’t know why anyone would willingly do that.
Well, unless you declare AI consumption fair use, only public domain is fair game, since every single license requires at least attribution. The courts regrettably seem to be buying the line that they are merely “learning” like a human and therefore exempt from the rules. All this ignoring that if a human reproduces something they “learned” close enough, they are on the hook for infringement, and in the AI scenario the codegen user has no sane way to know if the output is substantive and close enough to training material to count, since the origins are so muddled.
I just don’t understand the “real” developer to vibe coding transition. Like, it really sucks, even Opus 4.6, at being completely off the leash. I don’t understand how anyone can take what it yields as-is if they ever knew how to specifically get what they want. I know people who might be considered “coding adjacent” who are enthusiastic at seeing a utility brought to life, though usually what they get is not quite what they wanted, and they get frustrated when it doesn’t work right and no amount of “prompting” seems to get the thing to fix it. They were long intimidated by “coding”, but an LLM is approachable. Many of these folks have “scripted” far more convoluted stuff than many “coders”, yet they are intimidated by coding.