Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(December's finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)


It's almost as if letting an automated plagiarism machine execute arbitrary commands on your computer is a bad idea.
The documentation for "Turbo mode" for Google Antigravity:
No warning. No paragraph telling the user why it might be a bad idea. No discussion of the long history of malformed scripts leading to data loss. No discussion of the risk of injection attacks. It's not even named similarly to dangerous modes in other software (like "force" or "yolo" or "danger").
Just a cool marketing name that makes users want to turn it on. Heck, if I'm using some software and I see any button called "turbo", I'm pressing it.
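For contrast, the kind of guardrail the docs never mention takes maybe a dozen lines: gate any model-proposed command behind an allowlist and an explicit confirmation. A toy sketch — every name here is hypothetical, nothing to do with Antigravity's actual internals:

```python
import shlex

# Hypothetical policy: commands a model-proposed shell line may start with.
ALLOWED = {"ls", "cat", "git", "python"}

def gate(proposed: str, confirm=input) -> bool:
    """Return True only if the command is allowlisted AND a human confirms.
    'Turbo mode' is, in effect, deleting both of these checks."""
    try:
        argv = shlex.split(proposed)
    except ValueError:  # malformed quoting: refuse rather than guess
        return False
    if not argv or argv[0] not in ALLOWED:
        return False
    return confirm(f"Run {proposed!r}? [y/N] ").strip().lower() == "y"

# An injected "helpful" command never reaches the shell:
assert gate("rm -rf ~", confirm=lambda _: "y") is False
# Even an allowlisted command still needs a human "y":
assert gate("ls -la", confirm=lambda _: "n") is False
assert gate("ls -la", confirm=lambda _: "y") is True
```

None of this is hard; the point is that "Turbo" ships with the equivalent of `confirm=lambda _: "y"` baked in.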
It's hard not to give the user a hard time when they write:
But really they're up against a big corporation that wants to make LLMs seem amazing, safe, and autonomous. One hand feeds the user the message that LLMs will do all their work for them, while the other hand tells them "well, in our small print somewhere we used the phrase 'Gemini can make mistakes', so why did you enable turbo mode??"
yeah as I posted on mastodong.soc, it continues to make me boggle that people think these fucking ridiculous autoplag liarsynth machines are any good
but it is very fucking funny to watch them FAFO
After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: "Never let an LLM have any decision-making power." At most, LLMs will serve as a heuristic function for an algorithm that actually works.
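That division of labor is the classic generate-and-verify pattern: the unreliable component only proposes candidates, a cheap exact check decides, and a deterministic fallback guarantees correctness no matter how bad the proposals are. A toy sketch, with a random guesser standing in for the LLM (everything here is illustrative):

```python
import random

def llm_guess(n: int) -> int:
    """Stand-in for an LLM: proposes a factor of n, usually wrongly."""
    return random.randint(2, n - 1)

def factor(n: int, tries: int = 10_000) -> int:
    """Return a nontrivial factor of n.
    The guesser only *proposes*; the modulo check *decides*; and plain
    trial division keeps the result correct even if every guess is garbage."""
    for _ in range(tries):
        g = llm_guess(n)
        if n % g == 0:  # cheap, exact verification
            return g
    for g in range(2, int(n ** 0.5) + 1):  # boring but reliable fallback
        if n % g == 0:
            return g
    raise ValueError(f"{n} is prime")

f = factor(91)
assert 91 % f == 0 and 1 < f < 91  # verified answer, whoever produced it
```

The heuristic can only ever speed things up; it never gets to be wrong in the output, because nothing it says is trusted without the check.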
Unlike the railroads of the First Gilded Age, I don't think GenAI will have many long-term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it's not worth spending lots of money on a task where you don't need reliability.
The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT, and they do not see the millions of dollars worth of hardware needed to even run a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history towards an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?
The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true "use cases" to be mainly spam, and perhaps students cheating on homework.
Pessimistically, I think this scourge will be with us for as long as there are people willing to put code "that mostly works" into production. It won't be making decisions, but we'll get a new faucet of poor-code sludge to enjoy and repair.
I know it is a bit of elitism/privilege on my part, but if you don't know about the existence of Google Translate(*), perhaps you shouldn't be doing vibe coding like this.
*: this, of course, could have been an LLM-based vibe-translation error.
E: And I guess my theme this week is translations.
E2: another edit unworthy of a full post. Noticed on mobile, haven't checked on PC yet, but has anybody else noticed that the searchbar is prefilled with some question about AI? And I don't think that is included in the URL. Is that search prefilling AI advertising? Did the subreddit do that? Reddit? Did I make a mistake? Edit: Not showing up on my PC, but that uses old Reddit and adblockers. Edit nr NaN: Did more digging. I do see the search thing on new Reddit in my browser, but it is the AI-generated "related answers" on the sidebar (the thing I complained about in the past, how bad those AI-generated questions and answers are). So that's a mystery solved.