Like, every AI-generated thing I’ve seen, when viewed through the eyes of someone who actually knows what they’re doing, is at best below average. Maybe some things aren’t quite as bad as the general “AI slop”, but in the things I’m actually experienced in (code and art), I just see so many amateur mistakes in everything AI produces.
Regarding art, AI can make really visually appealing things, but it gets the details wrong. That’s something that a below average artist does. And regarding code, it’s the same thing. Overall, it has the appearance of decent code, but it gets the details wrong, just like a below average dev. (Probably about the level of a high school senior or college freshman.)
I’m not super experienced at writing, but I can also tell that it’s not very good at that. The stories it writes just aren’t compelling, but I’m not experienced enough to tell you why. And the same with music. It’s just below average, but I couldn’t tell you why.
I’m not trying to sound elitist by saying this, but I’ve noticed people who aren’t very good at these things tend to praise how good the AI is.
So, is it just me, or are the big fans of AI just below average at whatever the AI is doing?
No, you’re just overestimating where the average is.
People are vastly stupider and more useless than you give them credit for, and in a slop saturated world we’ll all get even more stupid and useless due to rampant brain rot.
We’ll reach human level artificial intelligence not by making LLMs intelligent (that’s fundamentally impossible with their design), but by lowering humanity’s average intelligence until it’s below that of a slime mold.
And a significant percentage of humanity is already there.
The Oatmeal had a wonderful post about AI art last year that captures so many of my own feelings around this: A cartoonist’s review of AI art
Wow, that is beautiful. So yeah, he and I mostly agree. I would say that AI probably should be heavily restricted, because right now it’s putting the entire economy into a really precarious place, and it’s also developed through extremely extensive copyright infringement. But yeah, that’s a great take.
I say let all the AI tech bros jam it into everything they want. When the bubble pops and all the giant corporations pushing this shit collapse it will free up space for a bunch of new little guys to move in and grow. THOSE are the ones that need to be restricted to make sure things never get so centralized again.
Hi, I’m a little guy! :) https://port87.com/
I hope Google and Microsoft never financially recover, so email is truly free (as in freedom) again.
PS: Restrict me, daddy.
PPS: I’m actually very in favor of restrictions for email providers. We should all have to play by the same rules.
I would agree, yeah.
I think a big driving force is that people who are drawn to generative AI are more likely to be mediocre at a thing, as well as demoralised by the effort required to improve.
I can sympathise with that drive, at least. After all, I’m a pretty mediocre writer. I desperately wish I could be better, but I am so far away from where I’d like to be that it feels hopeless sometimes.
Sometimes I wish that I believed that it actually was hopeless, because then I could just give up on trying rather than having to bear the pain of practicing my way out of mediocrity. However, I care more about improving than I do about my discomfort, and so I keep going with the XP grind.
A big thing that keeps me going is that I have seen the power of practice. I’m still far from where I’d like to be (and no doubt when I reach that point, my ambition will have grown along with my skill such that I will still be satisfied), but I’m able to look back on my efforts of the last few years and see real progress.
That’s why I find people who use generative AI to be quite tragic — they’re like alternate timeline versions of myself. It’s more comfortable to believe that the reason you’re not good at things is because there are people who are Good at it, and people who are Bad at it. If it’s a case of immutable categories of capability, then you have an excuse not to try. What’s especially tragic is that when these demoralised novices use generative AI, that’s often because they still have that drive to create inside themselves.
But man, it sucks to see, because I know that they will never find the satisfaction they crave in these tools. Sure, they might make something they’re proud of, giving them a facsimile of fulfillment, but it won’t compare to what they could be feeling. When I argue against generative AI, I’m not just being anti-AI, but pro-Art. Actually, no, it’s more than that: I’m pro-passion. If they could cultivate the kind of vulnerability required to actually use and develop their inner passion, then I would treasure any piece of art or writing generated through that process. I might not enjoy the art itself, in its own right, but I don’t need to, because what I love most about art is that it’s a fundamentally human process, and so any creative work is best enjoyed with that context.
Ugh, it drives me mad. I just want to grab them by the shoulders and shake them, while yelling “PLEASE COME AND JOIN US. I GENUINELY WANT TO SEE WHAT PASSIONS DRIVE YOUR URGE TO CREATE. I KNOW IT HURTS TO BE MEDIOCRE, BUT YOUR PASSIONS ARE WORTH PERSISTING FOR. WE’VE ALL BEEN THERE, AND WE WANT YOU HERE WITH US SO THAT WE CAN HELP SUPPORT YOU”. Alas, screaming at someone like this is not an effective evangelisation strategy, even if you tell them that we throw better parties, and that they’re invited.
I wish I could upvote this more than once.
I know some intelligent and artistic people who use AI, and some lazy people as well. I know folks with niche intelligence and general intelligence who use it and who don’t. It’s almost like literally everything else, where subsets of the population will either use it or not use it, and the “it” itself is not some determining factor in deciding the value of a person.
This thread is just another Lemmy superiority circle jerk, but hey, here we are. So jerk away guys.
I also don’t use it, don’t like it, but I also don’t judge people based on whether or not they use it, because I do plenty worth judging myself.
This isn’t about people who merely use it. It’s about people who love it and praise it. I use it, but I also understand how terrible it is at everything it does, so I use it sparingly and in very specific contexts.
I think people usually use genAI to cut corners. Rather than learn the skill themselves (and develop the sense of what makes the result good/bad), they just go with the zero-effort option.
It’s not the same when scientists use AI as a tool to create new materials, vaccines, or genetic research, solving problems that would take many years with traditional methods and millions of data points, versus a dumb user getting even dumber by substituting their own creativity and intelligence with an AI app. AI can be a useful tool for certain tasks, not a substitute for human capabilities. That’s the problem with AI: not the AI as such, but its abuse, turning it into hype, harvesting user data for big corps, manipulating and controlling decisions. The correct use of AI needs human intelligence.
LLMs are by definition “mediocre machines”. They are a statistical approach: the most common answer is far from the single best answer.
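A toy sketch of that point, with made-up probabilities (this is not a real LLM, just an illustration of why picking the statistically most likely continuation trends toward the most common answer rather than the best one):

```python
# Hypothetical next-token distribution for illustration only.
# Greedy decoding picks the highest-probability token, i.e. the most
# *common* continuation in the training data, not the best one.
next_token_probs = {
    "adequate": 0.40,     # frequent, bland continuation
    "serviceable": 0.30,
    "brilliant": 0.15,    # rarer, higher-quality continuation
    "inspired": 0.15,
}

greedy_choice = max(next_token_probs, key=next_token_probs.get)
print(greedy_choice)  # prints "adequate"
```

The bland option wins simply because it appears most often, which is the “mediocre machine” effect in miniature.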
Thank goodness it’s not just me.
It’s the conundrum. “When I ask (random slop machine) it’s so smart and gives me answers!!”
“Did you ask it things you already know?”
“No. But look at the answers!!”
People have no idea how much they’re damaging their brains.
The only person around me (that I know) that uses AI is me and the company mechanic. I only ever use it as an easier ‘image search’ to find the source of manga/anime or similar things. He only uses it to figure out what brand/model machine he needs to work on so he uses it to find the manual pdfs.
I feel like we’re using it the “right way,” but also like we’re not actually using the AI part, so…
Sounds like you’re both using a search engine that has the word “AI” slapped on it.
Yeah, I use it through Gemini (only ’cause the Pixel’s bottom button press is easy) and he (unfortunately 🤮) uses Grok, but we (or at least I) only use it for image/object recognition. It’s really useful for that at least, although again, calling it AI might be a misnomer.
There’s definitely a correlation between the understanding/misunderstanding of what it does and the understanding/misunderstanding of what it’s capable of.
If you understand even the basics of how an LLM works, you understand that it’s not capable of much more than consistent mediocrity. Even with the best possible training data and no bad training data, its tendency to “hallucinate” comes down to how incomplete its training set is.
When it “hallucinates”, it does so because it doesn’t have a direct answer it can generate for the given query. And humans can’t produce a complete, flawless data set of everything that exists.
So they are as average as we are when we don’t attempt to better ourselves. The difference is that they can’t better themselves, because they don’t learn. Beyond that, we can try to improve the training data, but the models still lack the ability to understand anything, including context, which makes it a losing task. The combined manpower of humans working together might get pretty close to a training set that would raise an LLM above the average human, but the LLM can’t maintain that trajectory by itself, and literally nothing else would get done.
Jesus fucking christ, dude, I can see why you were so offended by my comments about overly confident people on both sides of the argument being wrong. This is complete nonsense you’re just making up.
Offended? Darling. I don’t care about you other than I find you amusing. So edgy. So angry.
And you went as far as to stalk my comments. Obsessed much?
Darling. I don’t care about you other than I find you amusing. So edgy. So angry.
Well, that is an interesting thing to say… Anyway, are you going to address the fact that you just made all this up? There’s no reason to be this confidently incorrect and painfully below average.
I’m not your therapist, bud. You could have ended this “discussion” at any time. You continue to choose to engage me. I don’t know what you’re expecting but like. You aren’t even good at being annoying. That’s why you’re so funny.
Did you vote for trump? I am noticing a pattern
I’m not the one quoting his Twitter posts in my username. But nice try at psychoanalyzing me.
It wasn’t deep; it was just that you both seem to be reality-challenged. Buddy, I “psychoanalyzed” you a while ago when I said you were projecting your feelings.
But are you going to address that you’re just making shit up? Or are you just going to be mad that I challenged your safe-space echo chamber?
The thing that no one ever talks about in the software industry is how the majority of software developers are just barely good enough to get by.
I spent 10 years consulting and there are entire companies out there where nobody even knows what high quality code looks like.
LLMs are trained on all of this, so they produce code at the same level. Most developers don’t know the difference between good code and code that merely works (but is low quality).
In a world where no one cares about the code, and only cares that the product works (badly), LLMs are perfect.
I write code that no one is going to look at, ever (yet it goes in production).
Oh, I can recognize good code from code that works… I’m just not skilled enough to produce the former. (Does that put me ahead of most people by default?)
To me, one of the best ways to close that gap is the book The Pragmatic Programmer. It’s old, but if you ask me, it’s still as valuable as ever. It’s not about any particular language; it’s about how to write high-quality code in any language.
I’ve never met a real human person who loves AI. I’ve used it in very specific circumstances. I’ve met other people who’ve used it. But every one of those people shares some variation of my opinion: it’s useful for very few things and trash for the 99% of other things. I don’t know who these lovers of AI are, but I bet it’s the same handful of idiots who all run in one or two social circles, reinforcing each other’s opinions on everything. If a person’s ideas can’t be challenged, or they surround themselves with people exactly like themselves, their minds are doomed to atrophy.

Humans are coded to save energy. Talent and skill take long, grueling effort. AI allows the lazy to phone it in, which allows the mediocre to cosplay as the talented. But AI is a tool. For people with reading/writing difficulties it can bridge gaps that previously required much more effort and many more resources. That independence has value, but AI is not a replacement for the novelist.

Anyone who says the sun shines out of AI’s ass, or that the sun never sets on AI, or whatever BS they’re spouting, is either a snake oil salesman or their mark. Neither should be given much oxygen.
As an artist myself, when it comes to generated images, it’s weird and uncanny with the mistakes it makes.
It’s not the kinds of things a below average artist gets wrong, image gen gets things wrong in very specific ways. It aims for perfection on everything, but unlike a human, the algorithm has no understanding of what it’s trying to make.
If a below average artist makes mistakes, I still have a pretty good idea what they’re going for, because a human working on art has some real world understanding that every other human has, a big one being object persistence.
Yeah, the mistakes it makes are often different, but it makes mistakes in details just like a below average artist. The most common mistakes I see in real artists are things like inconsistent lighting, proportions, perspective, etc, and the AI can usually do those things alright, but it struggles with other details, like consistent anatomy, shapes, materials, etc.
It’s similar in code. A human being isn’t going to add a dependency that doesn’t exist, but that’s the kind of mistake an AI will make all the time. Other mistakes, like removing a function call it’s not supposed to in order to fix a failing test case, are mistakes a human would make, just like humans make anatomical mistakes in art all the time too.
So it’s not that the AI makes the exact same mistakes a below average human makes, but more about how often it makes mistakes, just like a below average human does.
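To make the “dependency that doesn’t exist” failure mode concrete, here’s a small Python sanity check of the kind that catches a hallucinated import before it reaches production. The package name `fastjsonx` is made up here purely for illustration:

```python
import importlib.util

def dependency_exists(module_name: str) -> bool:
    """Return True if the top-level module can actually be imported."""
    return importlib.util.find_spec(module_name) is not None

# "json" ships with Python's standard library.
# "fastjsonx" is a hypothetical, hallucinated package name.
print(dependency_exists("json"))       # True
print(dependency_exists("fastjsonx"))  # False (unless someone happens to have it installed)
```

A human reviewer catches this instantly; an LLM will happily emit the phantom import because the name merely *looks* statistically plausible.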
No matter how rudimentary one is at art, a human will always understand that things in the background are independent of things in the middle and foreground. AI’s obsession with making everything symmetric and balanced always results in the most repulsive uncanny valley looking slop. About the only thing it comes close to getting is abstract patterns but anyone can do the same thing with 25 year old software.
I had someone tell me it allowed them to “make the best app they’ve ever made”. It was a bootstrap CRUD task app.
Wow, yes. I think it goes both ways, though: relying on the AI for the human part of your work (design, writing) makes you more stupid. My direct boss is an Elon fanboy and ChatGPT devotee, and his thinking is slow. He’s not exactly stupid (there’s stuff he’s good at), but he doesn’t quickly make connections, and it sure seems like it’s related to the ChatGPT.