Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post — there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • blakestacey@awful.systems · 27 points · 13 days ago

    All participants in the Stubsack, including awful.systems regulars and those joining from elsewhere, are reminded that this is not debate club. Anyone tempted by the possibility of debate-club behavior is encouraged to touch their nearest grass immediately. We are here to sneer, not to bicker: this is a place to mock the outside world, not to settle grand matters of ideology, unless the latter is done in an extraordinarily amusing way.

  • swlabr@awful.systems · 18 points · 14 days ago

    Haven't seen this skeet posted here. Skeet:

    It's 2050 and a teen girl is torrenting a .tar.gz file of all the consciousnesses of all the tech bros who uploaded themselves into the cloud in a bid for immortality and modding them into The Sims 4

  • zogwarg@awful.systems · 18 points · 16 days ago

    My dad was a bit freaked out by a video version ("We're not ready for super-intelligence") of the "AI 2027" paper, particularly finding two end scenarios a bit spooky: Colossus-style cooperating AIs taking over the world, and the oligarch concentration-of-power one, which I think definitely echoed sci-fi he watched/read as a teen.

    In case anyone else finds it useful, these are the "comments as I watch it" that I compiled for him:


    Notes before watching the video:

    • AI-only channel, with only 3 videos

    • Produced by "80000hours", which is an EA branch (trying to peddle to you the best way to organize 40 years × 50 weeks × 40 hours = 80,000 hours of work [I love that they assume only 2 weeks of holidays]), and which is definitely cult-adjacent: https://80000hours.org/about/#what-do-we-do. It mostly appears to be attempting to steer young people into what they believe are "high-impact" jobs.


    Video Notes:

    • The backing paper, "AI 2027", is a bit of a joke; for reference, one of the main authors is very much a "cult member": Scott Alexander Siskind, author of "Slate Star Codex" and "Astral Codex Ten".

    • Other authors (all from the AI Futures Project) include:

      • Daniel Kokotajlo (podcast co-host of Siskind, ex-OpenAI employee, LessWrong/EA regular)
      • Thomas Larsen (ex-MIRI [Machine Intelligence Research Institute = really, really culty], LessWrong/EA regular)
      • Eli Lifland (LessWrong/EA regular)
      • Romeo Dean (Astra Fellowship recipient = money for AI Safety research, definitely EA sphere)
    • A lot of fluff trying to hype up the credentials of the authors.

    • AGI does not have a bounded definition.

    • They are playing up the China angle to try and drum up jingoistic support.

    • Exaggerating ChatGPT-3's success by merely citing "users", without mentioning actual revenue or actual quality.

    • Quote:

      How do these things interact, well we don't know but thinking through in detail how it might go is the way to start grappling with that.

      -> I think this epitomises the biggest flaw of their movement: they believe that from "first principles" it's possible to think hard enough (without needing to confront it with reality) and divine the future.

      -> You can look up "Prediction Markets", which is another of their ontological sins.

    • I will note that the prediction of "agents" was not a hard one, since this is what this whole circle wants to achieve, and, as the video itself points out, it's fantastically incompetent/unreliable.

    • Note: This video was made before the release of GPT-5. We don't know precisely how much more compute GPT-5 truly required, but it is a very incremental change compared to GPT-4. I think this philosophy of "more training" is why OpenAI is currently trying (half succeeding, half failing) to raise trillions of dollars to build out data centers; my prediction is that the AI bubble bursts before these data centers come to fruition.

    • Note: The video assumes models are kept secret, but in reality OpenAI would have a very vested interest in displaying capability, even without making a model available to the public. Also, even on consumer models, OpenAI currently loses a bunch of money on every query.

    • Note: The video assumes "Singularitarianism", i.e. ever-accelerating improvement in code quality, and that's why they keep the models secret. I think this hits a compute/energy wall in real life, even if you assume that LLMs are actually useful for producing "quality" code. These ideas are not new, and these people would raise alarms about them with or without current LLM tech.

    • Specific threats of "bio-weapons", which a priori cannot really be achieved without experimentation; and while "automated" labs half exist, they still require a lot of human involvement/resources. Technically grad students could also make deadly bioweapons, but no one is being alarmist about them.

    • Note: "Agent 2" continuous online learning is gobbledygook; that isn't how ML works, even today. At some point there are very diminishing returns, and it's a complete waste of time/energy to continue training a specific model; a qualitative difference would be achieved with a different model. I suspect this sneakily displays "Singularitarianism" dogma.

    • Quote:

      Hack into other servers / Install a copy of itself / Evade detection

      -> This is just science fiction; in the real world these models require specialized hardware to run at any effective speed, so this would be extremely unlikely to evade detection. Also, this treats the model as a single entity with single goals, when in reality any time it's "run" it is effectively a new instance.

    • Note: This subculture loves the concept of "science in secrecy", which features a lot in the writings of Eliezer Yudkowsky. It's cultish both in keeping their own deeds "in a veil of secrecy" and, helpfully here when making a prophecy/conspiracy theory, in making the claim hard to disprove (it's happening in secret!).

    • Note: Even today, chain-of-thought is not that reliable at explaining why a bot gives a particular answer. It's more analogous to guiding "search" than to true thought as in humans anyway. Them using "alien language" would not be that different.

    • Agent 3: magically fast and cheap, assuming there are no minimum energy requirements. Then you can magically run 200,000 copies of it, magically equivalent to 50,000 humans sped up by 30x. (The magic is "explained" in the paper by big assumptions, essentially just equating how fast you can talk with the quality of the talking, which, given the length of their typical blog posts, is actually quite funny.)

    • Note: "Alignment" was the core mission of MIRI/Eliezer Yudkowsky.

    • Note: They equate power and intelligence a lot (not in this video, but in general they're suspiciously racist/eugenicist about it), ignoring the material constraints of actual power [echo: again, the epitomical sin of "if you just think hard enough"].

    • Note: It also assumes that trillions of dollars of growth can actually happen simultaneously with millions losing their jobs.

    • I am betting that the "There is another" part of the video is probably deliberately echoing Colossus.

    • The video casually assumes that the only limit to practical fusion and nanotech is just intelligence (instead of these being potential dead ends; the nanotech part is actually a particular fancy of theirs, you can look up "diamondoid bacteria" on LessWrong if you want a laugh).

    • The two outcomes at the end of the video are literally robo-heaven and robo-hell, and if you just follow our teachings (in this case, slow-downs on AI) you can get to robo-heaven. You will notice they don't imagine/advocate for a future with no massive AI integration into society; they want their robo-heaven.

    • Quote:

      None of the experts are disagreeing about a wild future.

      -> I would say some of them are quite strongly suggesting that AGI coming soon is implausible. I think many would agree that right now the future looks dire with or without super-AI, or even regular AI.


    Takeaway section:

    Yeah this really is a cult recruitment video essentially.

    • BigMuffN69@awful.systems · 9 points · 16 days ago

      We're almost at the end of 2025 and agents don't fucking exist the way they predicted. Literally 0% acc so far. AI 2027 agmi.

      ^ image of Daniel K, who already updated his rapture prophecy to 2029 because he's a mark

    • swlabr@awful.systems · 9 points · 16 days ago

      I stumbled onto that vid a while back, watched the first minute or so, lol'ed at the glazing of Kokotajlo, and stopped the vid. I did think about posting it here to be torn apart but forgot about it. I watched a little bit further and got to "they chose to write this as a narrative". Of course they fucking did. It's their one thing. Write a shitty 10k-word story that amounts to some combination of "really makes you think" and "big if true".

      Here’s a story: Once upon a time there was a world. In it people were sad. Then one day swlabr was elected supreme benevolent ruler and then nobody was sad again :) the end. Wow make u think. Many experts agree

  • self@awful.systems · 15 points · 13 days ago

    today in I fucking called it: fedora (aka mostly red hat) has decided to allow slop code in a way that violates even their utterly mid stated principles around the tech

    if you're downstream from any fedora packages (and I don't know the scope of this policy so it might be safe to consider anything owned by red hat in general to be tainted — yes I realize most of us are downstream from a bunch of red hat shit) it might be time to evaluate an alternative if available

  • blakestacey@awful.systems · 14 points · 13 days ago

    New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested. […] 45% of all AI answers had at least one significant issue.

    • 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.

    • 20% contained major accuracy issues, including hallucinated details and outdated information.

    • Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.

    https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content

    And yet the BBC still has a Programme Director for "Generative AI" who gets trotted out to say "We want these tools to succeed". No, we don't, you blithering bellend.

  • antifuchs@awful.systems · 12 points · 12 days ago

    In lighter news, this anti-LLM rhyme made me chuckle:

    I will not talk with a chatbot
    I do not want it while I shop

    I do not want it on Windows X-box
    I do not want it in Firefox

    I do not want it in my house
    I do not want it on my mouse
    I do not want it here or there
    I do not want it anywhere.

    I do not want AI and Spam
    I do not want them Sam-Alt-Man

    • self@awful.systems · 8 points · 13 days ago

      of course the organization I know primarily for platforming fascists and astroturfing on YouTube was secretly an even worse grift and somehow tied in with Yarvin, why wouldn't it be

      given that Rossmann's at the head of this thing too, I'm starting to regret not taking GrapheneOS (who, notably, were also a target for this grift) seriously when they said Rossmann's involved in a bunch of terrible shit. the right to repair deserves a better figurehead.

      • David Gerard@awful.systems (mod) · 8 points · 13 days ago

        fuckin pisses me off, given his clippy campaign is helping move pivot shirts

        sigh

        I WILL NOT CHANGE, CLIPPY SUCKED FIRST

        • o7___o7@awful.systems · 5 points · 13 days ago

          Damn right. He needs to quit, he's the one who sucks.

          The fash don't have magic doodoo fingers that obligate decent people to surrender every time they touch something we like, and we should never concede as if they do.

      • froztbyte@awful.systems · 7 points · 13 days ago

        hadn't been aware that rossmann's into dodgy stuff (knew fairly little about him outside of some repair stuff on his channel), but ugh

        also clicking through into FUTO's projects, it all sort of gravitates around one point, "built on polycentric". so I wonder what that means?

        Polycentric is an open-source, distributed social network that lets you publish content to multiple servers.

        already at "I'm interested" because it's interesting to see what other work happens in this space.

        and then in the very next sentence we get to

        If you're censored on one server, your content remains accessible from other servers

        ah. I see. the "opt-out moderation" is also telling - how does it work? who knows! it's got a paragraph under the introduction but seems not to be mentioned anywhere else in the docs.

        extra frustrating to see because the projects these fucks are taking on (like the open cast thing) are items that sorely need stronger options in the open space. but not like this. never like this.

          • froztbyte@awful.systems · 3 points · 10 days ago

            certainly has more than a bit of that urbit coiner Sovereign Individual shit going on yeah

            I tried looking around a bit to see if I could find any info about contributors there, and for the most part none of them really seem to have much of an internet fingerprint at all. did find one person with a moderately extensive set of personal repo/project commits spanning back a few years, long enough to see that they were doing a BSc/Hons/something circa 2018. which isn't concrete but does strongly hint at a current age of mid-20s to mid-30s. "get 'em while they're young and you can poison their brains early!" - the bayfucker mantra

    • veganes_hack@feddit.org · 5 points · 13 days ago

      god damn it. i guess the name of the founder might have been a hint, only one letter away from our favorite roman saluter.

      i use immich, one of the projects they seem to have actually funded in a big way. it's a very good self-hosted replacement for google photos. at least the license is actually open source, as opposed to grayjay, so here's hoping it has a future in case the fascists try to fuck with it.

      i guess the problem though isn't with the funding and/or control of individual projects, it's with the long-term influence in the foss community they seem to be after.

    • Seminar2250@awful.systems · 3 points · 13 days ago

      i had a feeling about FUTO because of rossmann's involvement. became leery of him after this youtube bullshit from 2018:

      Let’s discuss why journalists are afraid of Elon Musk right now(and why they deserve to be)

      Elon Musk wants to come up with a way to rate the credibility and accuracy of media organizations & individual journalists. This blatant misrepresentation of his words, given in the middle of this conversation, is a PERFECT example of WHY this is so badly needed in modern society.

      I'm not a fan of Tesla for being, in many ways, the "Apple of cars." That being said, whether or not I like Tesla when it comes to a repair standpoint has nothing to do with the hate being thrown at Elon for something he never meant in the words he said, and is entirely separate from my agreement with him on the idea of a media credibility rating platform.

  • PMMeYourJerkyRecipes@awful.systems · 11 points · 16 days ago

    This is not a sneer so much as a sneer request; anyone know of any good articles written about the total hypocrisy of the Free Speech brigade since the inauguration? By far the most anti-speech environment in decades and most of them are still just whining about pronouns on campus or whatever.

    (Yes, FIRE has passed this very basic test and has occasionally switched topics from whining about "leftist professors" to saying stuff like "it's not great that we're deporting people for writing articles for their school paper about how genocide is bad". Literally everyone else is a hypocrite.)

  • BlueMonday1984@awful.systems (OP) · 11 points · 12 days ago

    For something lighter, here's an AI bro getting wowed by the shittiest "video game" I've ever seen (trust me, the screenshot doesn't do it justice):

    In lieu of sneering this shit, I'd like to argue that arts education should become mandatory for all students post-bubble, regardless of their profession. STEM, humanities, tech, doesn't matter - give them four years of art so they don't turn out like this guy.

    • corbin@awful.systems · 9 points · 14 days ago

      Closely related is a thought I had after responding to yet another paper that says hallucinations can be fixed:

      I'm starting to suspect that mathematics is not an emergent skill of language models. Formally, given a fixed set of hard mathematical questions, it doesn't appear that increasing training data necessarily improves the model's ability to generate valid proofs answering those questions. There could be a sharp divide between memetically-trained models which only know cultural concepts and models like Gödel machines or genetic evolution, which easily generate proofs but have no cultural awareness whatsoever.

    • lagrangeinterpolator@awful.systems · 8 points · 14 days ago

      Every time I hear a moderate AI argument (e.g. AI will be an aid for searching literature or writing code), it's like, "Look, it's impressive that the AI managed to do this. Sure, it took about three dozen prompts over five hours, made me waste another five hours because it generated some completely incorrect nonsense that I had to verify, produced an answer that was much lower quality than if I had just searched it up myself, and boiled two lakes in the process. You should acknowledge that there is something there, even if it did take a trillion dollars of hardware and power to grind the entire internet and all books and scientific papers into a viscous paste. Your objections are invalid because I'm sure things are gonna improve because Progress."

      I am doubly annoyed when I turn my back and they switch back to spouting nonsense about exponential curves and how AI is gonna be smarter than humans at literally everything.

    • CinnasVerses@awful.systems · 6 points · 14 days ago

      Wouldn't f(x) = x^2 + 1 be a counterexample to "any entire (differentiable everywhere) function that is never zero must be constant"? Or are some terms defined differently in complex analysis than in the math I learned?
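      For reference, a minimal worked check under the standard complex-analysis definitions (where "entire" means complex-differentiable on all of ℂ, so complex zeros count):

      x^2 + 1 = (x - i)(x + i), which vanishes at x = ±i, so over ℂ it is not a zero-free entire function;
      e^z ≠ 0 for every z ∈ ℂ, yet e^z is not constant, so the quoted statement is not a theorem as written (contrast Liouville: bounded and entire ⇒ constant).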

  • BlueMonday1984@awful.systems (OP) · 10 points · 14 days ago

    New paper on LLMs just dropped, titled LLMs Can Get "Brain Rot"!

    Currently a novelty, but it could prove useful for making the likes of Iocaine and Nepenthes more effective - especially since the paper notes:

    the damage is multifaceted in changing the reasoning patterns and is persistent against large-scale post-hoc tuning.

    It does also suggest doing some actual quality control to prevent damage to the LLMs, but that sure ain't happening

    • YourNetworkIsHaunted@awful.systems · 14 points · 13 days ago

      Does anyone else get flashbacks to that episode of the Powerpuff Girls where the villain takes over the city and makes a law that "crime is now legal"? Because that keeps popping into my head for some reason.