

Another day, another jailbreak method - a new method called InfoFlood has just been revealed, which involves taking a regular prompt and making it thesaurus-exhaustingly verbose.
In simpler terms, it jailbreaks LLMs by speaking in Business Bro.
Another day, another jailbreak method - a new method called InfoFlood has just been revealed, which involves taking a regular prompt and making it thesaurus-exhaustingly verbose.
In simpler terms, it jailbreaks LLMs by speaking in Business Bro.
āAnother thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubbleās wide-spread harms [ā¦] any notion of AI being value-neutral as a tech/concept has been equally undermined. [As such], I expect any positive depiction of AI is gonna face some backlash, at least for a good while.ā
Well, it appears Iāve fucking called it - Iāve recently stumbled across some particularly bizarre discourse on Tumblr recently, reportedly over a highly unsubtle allegory for transmisogynistic violence:
You want my opinion on this small-scale debacle, Iāve got two thoughts about this:
First, any questions about the line between man and machine have likely been put to bed for a good while. Between AI artās uniquely AI-like sloppiness, and chatbotsā uniquely AI-like hallucinations, the LLM bubble has done plenty to delineate the line between man and machine, chiefly to AIās detriment. In particular, creativity has come to be increasingly viewed as exclusively a human trait, with machines capable only of copying what came before.
Second, using robots or AI to allegorise a marginalised group is off the table until at least the next AI spring. As Iāve already noted, the LLM bubbleās undermined any notion that AI systems can act or think like us, and double-tapped any notion of AI being a value-neutral concept. Add in the heavy backlash thatās built up against AI, and youāve got a cultural zeitgeist that will readily other or villainise whatever robotic characters you put on screen - a zeitgeist that will ensure your AI-based allegory will fail to land without some serious effort on your part.
My only hope for this is that the GPUs in these CDO spiritual successors become dirt cheap afterwards.
They hopefully will, since the end of the AI bubble will kill AI for good and crash GPU demand.
Bonus: He also appears to think LLM conversations should be exempt from evidence retention requirements due to āAI privilegeā (tweet).
Hot take of the day: Clankers have no rights, and that is a good thing
Sidenote: The rats should count themselves extremely fucking lucky theyāve avoided getting skewered by South Park, because Parker and Stone would likely have a fucking field day with their beliefs
Apparently linkedinās cofounder wrote a techno-optimist book on AI called Superagency: What Could Possibly Go Right with Our AI Future.
This sounds like its going to be horrible
Zack of SMBC has thoughts on it:
Ah, good, Iāll just take his word for it, the thought of reading it gives me psychic da-
the authors at one point note that in 1984, Big Brotherās listening device means there is two way communication, and so the people have a voice. He wonders why Orwell didnāt think of this.
The closest thing I have to a coherent response is that Boondocks clip of Uncle Ruckus going āRead, nigga, read!ā (from Stinkmeaner Strikes Back, if youāre wondering) because how breathtakingly stupid do you have to be to miss the point that fucking hard
ābiological civilization is about to create artificial superintelligenceā is it though?
Iām gonna give my quick-and-dirty opinion on this, donāt expect a lengthy defence.
Short answer, no. Long answer: no, intelligence cannot be created by blindly imitating it with mere silicon
āMusic is just like meth, cocaine or weed. All pleasure no value. Donāt listen to music.ā
(Considering how many rationalists are also methheads, this joke wrote itself)
As a matter of fact, someoneās noted Ed Zitron had called this back in September:
Ed Zitronās planning a follow-up to āThe Subprime AI Crisisā:
(Its gonna be a premium column, BTW)
EDIT: Swapped the image for one thatās easier-to-read
This is pure speculation, but I get the feeling Microsoftās gonna significantly downsize, if not collapse, by the decadeās end.
This recent moveās gonna kneecap Microsoftās ability to function as a company, and their heavy investment into AI mean theyāll likely take the brunt of the impact when the bubble bursts.
New blogpost from Iris Meredith: Vulgar, horny and threatening, a how-to guide on opposing the tech industry
New thread from Baldur Bjarnason publicly sneering at his fellow programmers:
Anybody who has been around programmers for more than five minutes should not be surprised that many of them are enthusiastically adopting a tool that is harmful, destroying industries, sabotaging education, and hindering the energy transition because they feel itās giving them a moderate advantage
That they respond to those pointing some of this out with mockery (ānutsā, āshove your concern up your assā) and that their peers see this mockery as reasonable discourse is also not surprising. Tech is entirely built on the backs of workers with no regard for externalities or second order effects
Tech is also extremely bad at software. We habitually make fragile, insecure, complex, and hard to maintain code that backs poor UIs. The best case scenario is that LLMs accelerate already broken software dev processes in an industry that is built around monopolies and billionaire extremists
But, sure, feeling discouraged by the state of the industry is ālike quitting carpentry as a career thanks to the invention of the table sawā
Whatever
EDIT: Found out where Baldur got the ātable sawā quote from - added it accordingly.
Artificial intelligence and cheating/lying: two great tastes that go together
New thread from Ed Zitron, gonna focus on just the starter:
You want my opinion, Zitronās on the money - once the AI bubble finally bursts, I expect a massive outpouring of schadenfreude aimed at the tech execs behind the bubble, and anyone who worked on or heavily used AI during the bubble.
For AI supporters specifically, I expect a triple whammy of mockery:
On one front, theyāre gonna be publicly mocked for believing tech billionairesā bullshit claims about AI, and publicly lambasted for actively assisting tech billionairesā attempts to destroy labour once and for all.
On another front, their past/present support for AI will be used as grounds to flip the bozo bit on them, dismissing whatever they have to say as coming from someone incapable of thinking for themselves.
On a third front, I expect their future art/writing will be immediately assumed to be AI slop and either dismissed as not worth looking at or mocked as soulless garbage made by someone who, quoting David Gerard, āliterally cannot tell good from badā.
Dr. Abeba Birhane got an AI True Believertm email recently, and shared it on Bluesky:
You want my opinion, I fully support acausal robot deicide, and think AI rights advocates can go fuck themselves.
Alright thatās it: anime streaming needs to return to fansubbing
Fansubs are openly doing it for the love of the anime, so chances are theyād avoid AI slop like the plague (though the CHUDs would be okay with ChatGPT subs if it meant avoiding The Woketm)
(note: this link contains a skintight anime bosom so donāt open it in front of your boss unless your boss is chill)
Good thing Iām a fucking NEET, then
ā¦Lotus, you clever bastard.
The AI bros were right - AI is creating new business opportunities /s
If someoneās using AI, its a sign that theyāre (a) Nigerian Prince levels of gullible and (b) an anti-human tech asshole who fundamentally does not respect labour. Scamming these kinds of people is a moral duty.