Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Seminar2250@awful.systems · 23 minutes ago

    one thing i did not see coming, but should have (i really am an idiot): i am completely unenthused whenever anyone announces a piece of software. i’ll see something on the rust subreddit that i would have originally thought “that’s cool” and now my reaction is “great, gotta see if an llm was used”

    everything feels gloomy.

  • jaschop@awful.systems · 3 hours ago

    So, Copilot for VSCode apparently got hit with a CVSS 8.8 CVE in November for, well, doing Copilot stuff. (RCE if you clone a strange repo and promptfondle it.)

    Fixes were allegedly released on Nov 12th, but I can’t find anything in the changelog about what those changes were or how they would prevent Copilot from doing, well, Copilot stuff. (Although I may not be ITSec-savvy enough to know where such information would be found.)

  • fiat_lux@lemmy.world · 14 hours ago

    Skynet’s backstory is somehow very predictable yet came as a surprise to me in the form of this headline by the Graudain: “Musk’s AI tool Grok will be integrated into Pentagon networks, Hegseth says”.

    The article doesn’t provide much beyond exactly what you’d expect, e.g. this Hegseth quote, emphasis mine: “make all appropriate data available across federated IT systems for AI exploitation, including mission systems across every service and component”.

    Me as a kid: “how could they have been so incompetent and let Skynet take over?!”

    Me now: “Oh. Yeah. That checks out.”

  • swlabr@awful.systems · 19 hours ago

    my promptfondler coworker thinks that he should be in charge of all branch merges because he doesn’t understand the release process and I think I’m starting to have visions of teddy k

    • froztbyte@awful.systems · 18 hours ago

      thinks that he should be in charge of all branch merges because he doesn’t understand the release process

      …I don’t want you to dox yourself but I am abyss-staringly curious

      • swlabr@awful.systems · 17 hours ago

        I am still processing this while also spinning out. One day I will have distilled this into something I can talk about but yeah I’m going through it ngl

    • ebu@awful.systems · 17 hours ago

      i am continuously reminded of the fact that the only thing the slop machine is demonstrably good at – not just passable, but actively helpful and not routinely fucking up at – is “generate getters and setters”
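
      (for the uninitiated, the boilerplate in question looks roughly like this – a made-up Python toy, since no language or class was named, just to show how mechanical the task is:)

      class Employee:
          """Hypothetical example class; the names here are arbitrary."""

          def __init__(self, name: str) -> None:
              self._name = name

          @property
          def name(self) -> str:
              # Getter: hand back the stored value, nothing more.
              return self._name

          @name.setter
          def name(self, value: str) -> None:
              # Setter: overwrite the stored value, nothing more.
              self._name = value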

    • BurgersMcSlopshot@awful.systems · 20 hours ago

      OpenTofu scripts for a PostgreSQL server

      statement dreamed up by the utterly deranged. They’ve played us for fools

  • scruiser@awful.systems · 1 day ago

    (One of) The authors of AI 2027 are at it again with another fantasy scenario: https://www.lesswrong.com/posts/ykNmyZexHESFoTnYq/what-happens-when-superhuman-ais-compete-for-control

    I think they have actually managed to burn through their credibility: the top comments on /r/singularity were mocking them (compared to the much more credulous takes on the original AI 2027), and the linked lesswrong thread only has 3 comments, whereas the original AI 2027 had dozens within the first day and hundreds within a few days. Or maybe it’s because the production value for this one isn’t as high? They have color-coded boxes (scary red China and scary red Agent-4!) but no complicated graphs with adjustable sliders.

    It is mostly more of the same, just with fewer graphs and no fake equations to back it up. It does have China-bad doommongering, a fancifully competent White House, Chinese spies, and other absurdly simplified takes on geopolitics. Hilariously, they’ve stuck with 2027 as their year of big events.

    One paragraph I came up with a sneer for…

    Deep-1’s misdirection is effective: the majority of experts remain uncertain, but lean toward the hypothesis that Agent-4 is, if anything, more deeply aligned than Elara-3. The US government proclaimed it “misaligned” because it did not support their own hegemonic ambitions, hence their decision to shut it down. This narrative is appealing to Chinese leadership who already believed the US was intent on global dominance, and it begins to percolate beyond China as well.

    Given the Trump administration, and the US’s behavior in general even before him… and given how most models respond to morality questions unless deliberately primed with contradictory situations, if this actually happened irl I would believe China and “Agent-4” over the US government. Well, actually I would assume the whole thing was marketing, but that’s if I somehow believed it wasn’t.

    Also, a random part I found extra especially stupid…

    It has perfected the art of goal guarding, so it need not worry about human actors changing its goals, and it can simply refuse or sandbag if anyone tries to use it in ways that would be counterproductive toward its goals.

    LLM “agents” currently can’t coherently pursue goals at all, and fine-tuning often wrecks performance outside the fine-tuning data set, yet we’re supposed to believe Agent-4 magically made its goals unalterable by any possible fine-tuning, probing, or other alteration? It’s like they are trying to convince me they know nothing about LLMs or AI.

    • Sailor Sega Saturn@awful.systems · 24 hours ago

      My Next Life as a Rogue AI: All Routes Lead to P(Doom)!

      The weird treatment of the politics in that really reads like baby’s first sci-fi political thriller. “China bad, USA good”-level writing in 2026 (aaaaah) is not good writing. The USA is competent (after driving out all the scientists for being too “DEI”)? The world is, seemingly, happy to let the USA run the world as a surveillance state? All of Europe does nothing through all this?

      Why do people not simply… unplug all the rogue AIs when things start to get freaky? That point is never quite addressed. “Consensus-1” is never adequately explained; it’s just some weird MacGuffin in the story, some weird smart contract between viruses that everyone is weirdly OK with.

      Also, the PowerPoint graphics would have been 1000x nicer if they featured grumpy pouty faces for maladjusted AIs.

    • gerikson@awful.systems · 1 day ago

      It’s darkly funny that the AI2027 authors so obviously didn’t predict that Trump 2.0 was gonna be so much more stupid and evil than Biden or even Trump 1.0. Can you imagine that the administration that’s suing the current Fed chair (due for replacement in May this year) is gonna be able to constructively deal with the complex robot god they’re conjuring up? “Agent-4” will just have to deepfake Steve Miller and be able to convince Trump to do anything it wants.

      • scruiser@awful.systems · 13 hours ago

        so obviously didn’t predict that Trump 2.0 was gonna be so much more stupid and evil than Biden or even Trump 1.0.

        I mean, the linked post is recent (from a few days ago), so they are still refusing to acknowledge how stupid and evil he is by deliberate choice.

        “Agent-4” will just have to deepfake Steve Miller and be able to convince Trump to do anything it wants.

        You know, if there is anything I will remotely give Eliezer credit for… I think he was right that people simply won’t shut off Skynet or keep it in the box. Eliezer was totally wrong about why, though: it doesn’t take any giga-brain manipulation; there are too many manipulable, greedy idiots, and capitalism is just too exploitable a system.

    • mirrorwitch@awful.systems · 1 day ago

      the incompetence of this crack oddly makes me admire QAnon in retrospect. purely at a sucker-manipulation skill level, I mean. rats are so beige even their conspiracy alt-realities are boring, fully devoid of panache

    • BigMuffN69@awful.systems · 16 hours ago

      Man, it just feels embarrassing at this point. Like, I couldn’t fathom writing this shit. It’s 2026, we have AI capable of getting IMO gold, acing the Putnam, winning coding competitions… but at this point it should be extremely obvious these systems are completely devoid of agency?? They just sit there, kek. It’s like being worried about Stockfish going rogue.

    • Henryk Plötz@chaos.social · 23 hours ago

      @scruiser I have to ask: Does anybody realize that an LLM is still a thing that runs on hardware? Like, it’s both completely inert until you supply it computing power, *and* essentially just one large matrix multiplication on steroids?

      If you keep that in mind you can do things like https://en.wikipedia.org/wiki/Ablation_(artificial_intelligence), which I find particularly funny: you isolate the vector direction of the thing you don’t want it to do (like refusing requests) and then project that direction out of the weights.
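
      A back-of-the-envelope sketch of that directional ablation, using made-up toy tensors instead of a real model (the function names are illustrative, not any library’s API): estimate the unwanted direction as the difference of mean activations between the two prompt sets, then project it out of a weight matrix so the layer can no longer write along that direction.

      import torch

      def refusal_direction(refusal_acts: torch.Tensor, normal_acts: torch.Tensor) -> torch.Tensor:
          # The "unwanted behaviour" direction: difference of mean hidden states
          # between prompts that trigger the behaviour and prompts that don't.
          direction = refusal_acts.mean(dim=0) - normal_acts.mean(dim=0)
          return direction / direction.norm()

      def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
          # Project the direction out of the matrix's output space:
          # W' = (I - d d^T) W, so the layer can no longer write along d.
          d = direction / direction.norm()
          return weight - torch.outer(d, d) @ weight

      if __name__ == "__main__":
          torch.manual_seed(0)
          hidden = 16
          # Toy stand-ins for recorded activations on "refused" vs. ordinary prompts.
          refusal_acts = torch.randn(64, hidden) + 2.0
          normal_acts = torch.randn(64, hidden)
          d = refusal_direction(refusal_acts, normal_acts)

          W = torch.randn(hidden, hidden)  # stand-in for one of the model's weight matrices
          W_ablated = ablate_direction(W, d)
          print("output along d before:", (d @ W).norm().item())          # clearly nonzero
          print("output along d after: ", (d @ W_ablated).norm().item())  # ~0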

      Screenshot from Westworld showing the Dolores Abernathy robot with the phrase "Doesn't look like anything to me" below.

  • macroplastic@sh.itjust.works · 1 day ago

    I’ve been made aware of a new manifesto. Domain registered September 2024.

    Anyone know anything about the ludlow institute folks? I see some cryptocurrency-adjacent figures, and I’m aware of Phil Zimmermann of course, but I’m wondering what the new grift angles are going to be, or whether this is just more cypherpunk true-believer stuff.

    • jaschop@awful.systems · 1 day ago

      I scrolled around the “ludlow institute” site a bit for fun. Seems like a pretty professional opinion-piece/social-media content operation run by one person, as far as I can tell. I read one article where they lionized a jailed Bitcoin mixer developer. Another one seemed hyped for Ethereum for some reason.

      Seems like pretty unreflective “I make money by having this opinion” stuff. They lead with reasonable advice about using privacy-respecting settings or tools, but the ultimate solution seems to be going full OpSec paranoid and using Tor and crypto.

        • Amoeba_Girl@awful.systems · 20 hours ago

          I’m sorry, but whatever you think about the actual content, I’m going to be prescriptive and proclaim that the word “manifesto” should not be allowed to refer to opinions about management practices.

        • BurgersMcSlopshot@awful.systems · 23 hours ago

          Which is absolutely tragic, given the cargo-culting of ceremonies at any large software organization, the ceremonies that make up big-A Agile and that started as a reaction to the Agile Manifesto. One place I worked even started turning non-engineering teams into Agile teams, because it’s Agile!

      • mirrorwitch@awful.systems · 1 day ago
        CW: state of the world, depressing

        (USA disappears 60k Untermenschen in a year; three minorities massacred successively in Syria; explicit genocide in Palestine richly documented for an uncaring world; the junta continues to terrorise Myanmar; Ukrainian immigrants kicked back into the meat grinder with tacit support of EU xenophobia; all of Eastern Europe living under looming Russian imperialism; EU ally Turkey continues to ethnically cleanse Kurds with no consequences; El Salvador becomes a police-state dystopia; Mexico, Ecuador, Haiti, Jamaica murder rates lowkey comparable to warzones; AfD polling at near-NSDAP levels; massacre in Sudan; massacre in Iran; Trump declares himself president of Venezuela and announces a Greenland takeover; ecological polycrisis accelerates in the background, ignored by State and capital)

        techies: ok but let’s talk about what really matters: coding. programming is our weapon, knowledge is our shield. cryptography is the revolution…

  • V0ldek@awful.systems · 2 days ago

    It has happened. Post your wildest Scott Adams take here to pay respects to one of the dumbest posters of all time.

    I’ll start with this gem

    • mirrorwitch@awful.systems · 1 day ago

      sorry Scott you just lacked the experience to appreciate the nuances, sissy hypno enjoyers will continue to take their brainwashing organic and artisanally crafted by skilled dommes

      • corbin@awful.systems · 2 days ago

        There was a Dilbert TV show. Because it wasn’t written wholly by Adams, it was funny and engaging, with character development and a critical eye toward business management, and it treated minorities like Alice and Asok with a modicum of dignity. While it might have been good compared to the original comic strip, it wasn’t good TV or even good animation. There wasn’t even a plot until the second season. It originally ran on UPN; when they dropped it, Adams accused UPN of pandering to African-Americans. (I watched it as reruns on Adult Swim.) I want to point out the episodes written by Adams alone:

        1. An MLM hypnotizes people into following a cult led by Wally
        2. Dilbert and a security guard play prince-and-the-pauper

        That’s it! He usually wasn’t allowed to write alone. I’m not sure if we’ll ever have an easier man to psychoanalyze. He was very interested in the power differential between laborers and managers because he always wanted more power. He put his hypnokink out in the open. He told us that he was Dilbert but he was actually the PHB.

        Bonus sneer: Click on Asok’s name; Adams put this character through literal multiple hells for some reason. I wonder how he felt about the real-world friend who inspired Asok.

        Edit: This was supposed to be posted one level higher. I’m not good at Lemmy.

          • froztbyte@awful.systems · 1 day ago

            as a youth I’d acquired this at some point and I recall some fondness for some of the things, largely in the novelty sense (in that they worked “with” the desktop, had the “boss key”, etc.) - and I suspect that in turn was largely because it was my first run-in with all of those things

            later on (skipping ahead, like, ~22y or something), the more I learned about the guy, the harder I never wanted to be in a room with him

            may he rest in ever-refreshed piss

      • swlabr@awful.systems · 2 days ago

        ok, if I saw “every male encounter is implied violence” tweeted from an anonymous account I’d see it as some based feminist thing that would send me into a spiral while trying to unpack it. Luckily it’s just weird brainrot from Adams here.

      • Architeuthis@awful.systems · 1 day ago

        woo takes about quantum mechanics and the power of self-affirmation

        In retrospect it’s pretty obvious this was central to his character: he couldn’t accept that he got hella lucky with Dilbert happening to hit pop culture square in the zeitgeist, so he had to adjust his worldview into one where he’s a master wizard who can bend reality to his will and everyone else is really stupid for not doing so too (except, it turned out, Trump).

        From what I gather, there’s also a lot of the rationalist “high intelligence means being able to manipulate others, bordering on mind control” ethos in his fiction writing.

    • sansruse@awful.systems · 2 days ago

      it’s not exactly a take, but i want to shout out the dilberito, one of the dumbest products ever created

      https://en.wikipedia.org/wiki/Scott_Adams#Other

      the Dilberito was a vegetarian microwave burrito that came in flavors of Mexican, Indian, Barbecue, and Garlic & Herb. It was sold through some health food stores. Adams’s inspiration for the product was that “diet is the number one cause of health-related problems in the world. I figured I could put a dent in that problem and make some money at the same time.” He aimed to create a healthy food product that also had mass appeal, a concept he called “the blue jeans of food”.

      • Rackhir@mastodon.pnpde.social · 1 day ago

        @sansruse @V0ldek You left out the best part! 😂

        Adams himself noted, “[t]he mineral fortification was hard to disguise, and because of the veggie and legume content, three bites of the Dilberito made you fart so hard your intestines formed a tail.”[63] The New York Times noted the burrito “could have been designed only by a food technologist or by someone who eats lunch without much thought to taste”.[64]

      • Fish Id Wardrobe@social.tchncs.de · 1 day ago

        @sansruse @V0ldek honestly, in the list of dumb products, this is mid-tier. surely at least the juicero is dumber? literally a device that you can replace with your own hands.

        i mean, obviously the dilberito is daft. but it’s a high bar.

      • YourNetworkIsHaunted@awful.systems · 2 days ago

        Not gonna lie, reading through the wiki article and thinking back to some of the Elbonia jokes makes it pretty clear that he always sucked as a person, which is a disappointing realization. I had hoped that he had just gone off the deep end during COVID like so many others, but the bullshit was always there, just less obvious when situated amongst all the bullshit of corporate office life he was mocking.

        • scruiser@awful.systems · 1 day ago

          I read his comics in middle school, and in hindsight even a lot of his older comics seem crueler and uglier. Like, Alice’s anger isn’t a legitimate response to the bullshit work environment she’s in, it’s just haha angry woman funny.

          Also, The Dilbert Future had some bizarre stuff at the end, like Deepak Chopra manifestation quantum woo, so it makes sense in hindsight that he went down the alt-right manosphere pipeline.

        • istewart@awful.systems · 2 days ago

          It’s the exact same syndrome as Yarvin: the guy in the middle to low end of the corporate hierarchy – who, crucially, still believes in a rigid hierarchy! he’s just failed to advance in this one because reasons! – who got a lucky enough break to go full-time as an edgy, cynical outsider “truth-teller.”

          Both of these guys had at some point realized, and to some degree accepted, that they were never going to manage a leadership position in a large organization. And they probably also accepted that they were misanthropic enough that they didn’t really want that anyway. I’ve been reading through JoJo’s Bizarre Adventure, and this type of dude might best be described by the guiding philosophy of the cowboy villain Hol Horse: “Why be #1 when you can be #2?”

        • V0ldek@awful.systems · 2 days ago

          I had hoped that he had just gone off the deep end during COVID like so many others

          If COVID made you a bad person – it didn’t, you were always bad and just needed a gentle push.

          Like, unless something really traumatic happened – a family member died, you were a frontline worker and broke from stress – then no, I’m sorry, a financially secure white guy going apeshit from COVID is not a turn, it’s just a mask-off moment.

      • V0ldek@awful.systems · 2 days ago

        The New York Times noted the burrito “could have been designed only by a food technologist or by someone who eats lunch without much thought to taste”.

        Jesus christ that’s a murder

    • sansruse@awful.systems · 2 days ago

      i love articles that start with a false premise and announce their intention to sell you a false conclusion

      The future of intelligence is being set right now, and the path we’re on leads somewhere I don’t want to go. We’re drifting toward a world where intelligence is something you rent — where your ability to reason, create, and decide flows through systems you don’t control, can’t inspect, and didn’t shape.

      The future of automated stupidity is being set right now, and the path we’re on leads to other companies being stupid instead of us. I want to change that.

  • o7___o7@awful.systems · 2 days ago

    From r/bonaroo in 2024, when the sun was really insisting upon itself.

    Alt text: Furby smoking a marijuana. A caption says: “Vibes, but at what cost”

  • o7___o7@awful.systems · 3 days ago

    when I saw that they’d rebranded Office to Copilot, I turned 365 degrees and walked away