Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • lagrangeinterpolator@awful.systems · 6 points · 3 hours ago

      How many people, if they were given $1.3 million just once in their lifetime, would figure out far better uses for that money than this guy?

      • blakestacey@awful.systems · 3 points · 40 minutes ago

        Coincidentally, it came up in conversation last night that the head of AI at Northeastern University makes $1.3 million a year (I don’t know where that number came from, but it’s what I heard, and it’s apparently the second-highest salary at the university, exceeded only by the president’s).

      • BurgersMcSlopshot@awful.systems · 8 points · 2 hours ago

        you give me 1.3 million dollars and I’ll fuck off on a motorcycle for the rest of my natural life and that would still be a better value for the money than whatever the fuck this is.

  • Seminar2250@awful.systems · 8 points · edited · 20 hours ago

    zulip added slop to their codebase a long time ago (1, 2) but now they’ve released this bullshit blog post with some choice nonsense:

    I seriously considered banning LLM use for Zulip contributions. But our view is that contributors should be allowed to use modern tools in the service of producing great, reviewable work. AI-assisted work is of course subject to the same rigorous review processes we’ve always used for community contributions.

    So we decided to invest in creating, refining, and enforcing a new AI use policy, which has the following key tenets:

    • End-to-end human responsibility for work and the communication around it. You always need to understand, test, and explain the changes you’re proposing to make, whether or not you used an LLM as part of your process to produce them.
    • Clear and concise communication about points that actually require discussion. While we allow carefully edited AI-generated PR descriptions, we’ve had to ban AI-generated chat messages in the development community as too disruptive. Manual enforcement of this policy has been rough, with far more PRs closed without review, stern warnings, and outright bans of repeat offenders than we’ve ever had to apply before. (What do you do when someone apologizes for submitting AI slop… by copy-pasting an apology from ChatGPT, including surrounding quotation marks?) We expect that next fall, automation or other major changes will be required for the PR triage process to be manageable.

    The results [of using Claude] were promising (and far better than just a few months prior) — enough for us to start investing in teaching Claude Code how to self-review its work, and how to produce PRs that are easy for maintainers to review. This has largely been an AI-supported process of digesting our contributor documentation into CLAUDE.md, and iterating when we see the model struggle.

    i liked zulip 😞

    • YourNetworkIsHaunted@awful.systems · 12 points · 17 hours ago

      I’m not going to start a punch-up with a dev team or maintainer who believes that AI tools can help good programmers do good work or whatever, but time and again we see that, just like crypto before it, you aren’t inviting good programmers to work with you. You’re inviting the bros. AI bros and crypto bros are a specific type of Guy. I’m sure there were dotcom bros in the 90s. This is not a new problem, even if the current economic circumstances make being this type of Guy more viable than ever, apparently.

      It’s not just that the tech is bad (though it is bad), it’s that it’s uniquely privileged by culture and economics to empower the worst assortment of morons and grifters outside of Wall Street (and also inside of Wall Street, because of fucking course it does).

  • BlueMonday1984@awful.systems (OP) · 10 points · 1 day ago

    New(ish) Baldur Bjarnason - a fairly politically charged one at that, going into the US hegemony powering the current tech industry (and the AI bubble by extension), and how the Hormuz crisis is all-but guaranteed to topple the whole thing.

    • YourNetworkIsHaunted@awful.systems · 3 points · 14 hours ago

      I particularly appreciate the argument he makes about the tech industry pivoting from creating value to exercising control. I disagree that this trend is specific to the tech industry, but with the possible exception of Monsanto they have been the most successful at it.

      With the obvious failings of the American state to perform its basic duties and the cross-pollination of the American political and corporate elites, it seems plausible that at least some factions in the tech industry are awaiting an opportunity to take advantage of this weakness they’ve created and exercise that control over the functions of the state directly. I feel like I should be saying this into a webcam from behind a cartoonishly large desk in between shilling for nutritional supplements, but I’d be lying if I said I didn’t fear what rough beast, its hour come at last, slouches towards Bethlehem to be born.

  • gerikson@awful.systems · 7 points · 1 day ago

    There’s a… robust debate about LLM slop submissions on everyone’s favorite boiled crustacean site.

    First shot fired: a promptfondler suggests suppressing all comments pointing out that a submission reeks of slop by flagging them as “off-topic” [1]

    “This is written by an LLM” comments should be flagged as off-topic (80 net upvotes, 139 comments)

    Riposte: a suggestion that posting LLM-generated content should be a bannable offence:

    LLM generated submissions should be disallowed (274 net upvotes, 108 comments)

    So far it looks as if the anti-slop forces have public opinion on their side.


    [1] short explanation of how flagging of comments works on lobste.rs - it’s sort of a downvote, but the flagger has to choose from a list of reasons. If a commenter accrues enough flags they’ll get a red warning banner, and might possibly be banned as disruptive.

    • gerikson@awful.systems · 4 points · 1 day ago

      OK here’s a followup, which I’m putting out here as there’s probably a higher proportion of neurodivergent people here than in other fora I frequent

      A commenter on lobste.rs states that being anti-LLM is effectively being against neurodivergent individuals, because many such individuals express themselves in prose in a way that’s indistinguishable from LLM output.

      Is this a widespread viewpoint?

      https://lobste.rs/s/wee21u/this_is_written_by_llm_comments_should_be#c_nadrad

      • YourNetworkIsHaunted@awful.systems · 4 points · 17 hours ago

        I was trying to reply by way of linking a piece by Robert Kingett that had been shared here some time ago that, in excruciating detail and with righteous fury distilled to cold analysis, explained why AI is absolute shit for accessibility aids. His experience is in the realm of physical disability rather than neurodivergence, but that only makes the problems more starkly illustrated rather than unique.

        Unfortunately I couldn’t find that piece, but I found this one and needed to explain to the kid why I randomly laughed out loud.

      • froztbyte@awful.systems · 2 points · 15 hours ago

        I recall seeing someone elsewhere on the fedi trying to drum up a point like that a few weeks ago; their complaint was something like “I’ve been chased out of neurodivergent spaces for not being enough into LLMs”

        No idea if their claim was true; I can definitely see the possibility of some ND neurotypes slanting more favourably, but nfi on the values

        Not sure I buy the ground for that argument anyway tho. Lotta people used to smoke and society slapped all manner of regulation on that

      • David Gerard@awful.systems (M) · 4 points · 24 hours ago

        I called it out as lies and bullshit, the poster asserted it was totally true and I asked for numbers to support this statistical claim.

        • blakestacey@awful.systems · 6 points · 23 hours ago

          And instead of providing numbers, they came back with an anecdote about university administrators being incompetent (which is deeply unsurprising and thus, in the Shannon sense, conveys no information).

      • flere-imsaho@awful.systems · 6 points · 1 day ago

        this is obvious bullshit: theoretically, my writing is affected by two factors that might skew the assessment towards it having been generated by an llm: i’m neurodivergent (adhd) and english is not my native language – and i was never accused of using synthetic text generators…

  • Architeuthis@awful.systems · 12 points · 2 days ago

    In other Scott of Siskind news, he just posted an entirely unnecessary amount of words to aggressively push back against the adage that “all exponentials sooner or later turn into sigmoids”, as if it was by itself a load-bearing claim of the side arguing against the direct imminence of the machine god.

    It’s just a bunch of arguing by analogy (“helping you build intuition”) and you-can’t-really-knows while implying AI 2027 was very science much rigorous, but it also feels kind of desperate, like why are you bothering with this overperformative setting-the-record-straight thing, have you been feeling inadequate as an AI-curious stats fondler of note lately?

    • scruiser@awful.systems · 5 points · 2 days ago

      he just posted an entirely unnecessary amount of words

      taking a quick look at it… it’s actually short by Scott’s standards, but still overly long, given that the only point he makes is claiming Lindy’s Law is applicable to predicting AI progress in the absence of other information. Edit: glancing at it again… it’s not that short; I kinda skimmed until I got to Scott’s actual point the first time through. You can’t blame me for not reading it.

      you-can’t-really-knows

      Yeah, he straw-mans AI critics/skeptics as trying to make an argument from ignorance, then tries to argue against that strawman using Lindy’s Law (which assumes ignorance and a Pareto distribution). He completely ignores that AI critics are actually making detailed arguments about LLM companies consuming all the good and novel training data, hitting the limits on what compute costs they can afford, running into problems of the long lead time for building datacenters, etc. Which is pretty ironic given his AI 2027 makes a nominal claim to accounting for all that stuff (in actuality it basically all rests on METR’s task horizons, and distorts even that already questionable dataset).

      • Architeuthis@awful.systems · 4 points · edited · 14 hours ago

        Building infinite compute is hard, man

        As if LLMs being the last step before AGI/ASI/The Metal Messiah is a foregone conclusion. As far as I can tell even the AI 2027 thing only argues that once the bots completely nail down programming (any minute now) then the foom happens and the models will magic themselves into true AI, because apparently being good at solving coding problems is a sufficient proxy for superintelligence, hence the METR infatuation.

        • YourNetworkIsHaunted@awful.systems · 4 points · 17 hours ago

          I mean, to be fair that’s not unique to them - software engineers have been worse than physicists in assuming that all of reality and human experience is downstream from their chosen field.

    • lurker@awful.systems · 5 points · 2 days ago

      The idea of “the exponential curve goes up forever” has always been silly and an idea rooted in capitalism for me (“no bro you don’t get it we’re gonna get infinite money forever”). Limited resources exist, and people are already very fed up with the ludicrous amounts of water and electricity data centres take up. Making bigger models that need to run for longer is also probably going to take an exponential amount of resources (and also make people hate you more).

    • ivyastrix@awful.systems · 5 points · 2 days ago

      Fran has done some really great writing on this, really admire her ability to deconstruct a community she’s fond of.

  • blakestacey@awful.systems · 13 points · edited · 3 days ago

    Apparently, the American Physical Society is revising their AI policy to allow “broader applications” than the “light editing” they currently permit.

    https://indico.global/event/16413/contributions/153970/attachments/69779/135365/JSayre-Pheno2026.pdf#page=8

    I currently have a review request sitting in my inbox from them. I’m thinking of using this as a reason to decline that request.

    I would rather quit physics than accept the institutional endorsement of skill-destroying, environmentally disastrous fashtech.

    • David Gerard@awful.systems (M) · 5 points · 2 days ago

      looking very much forward to that crashing head first into arXiv threatening a ban if your chatbot fucks up in your name

      • scruiser@awful.systems · 4 points · 2 days ago

        I was pretty happy about seeing that news about arXiv! So much news has been various organizations giving into LLM usage like some kind of inevitability, so it was a nice change of pace.

    • scruiser@awful.systems · 10 points · 3 days ago

      It is this continuing slippage of standards that makes me appreciate the hard line against any and all genAI that places like awful.systems have. You concede one small usage and the boosters will keep pushing for more.

      • Soyweiser@awful.systems · 6 points · 3 days ago

        Yeah, the first AI comes in all nice and friendly, but if you don’t toss them out, before you know it you turn out to be an AI bar.

        (Also noticed that a lot of ‘I just want some nuanced talks’ friendly-looking AI bros are not friendly at all when they keep getting pushback.)

        • YourNetworkIsHaunted@awful.systems · 9 points · 2 days ago

          But I listened and agreed that you had serious concerns about certain aspects of this technology. I even agreed when you talked about how frustrating it was that specifically other people wanted to do bad things. I listened as you asked whether I had any options to address those concerns! What more do you want from me before you agree to let me do and say whatever I want!

  • o7___o7@awful.systems · 18 points · 4 days ago

    Prompt goblins insist that we’re backward and irrelevant. Why do they crave our sweet delicious approval?

    • scruiser@awful.systems · 7 points · 3 days ago

      The plagiarism, massive expenditure of venture capital, and unreliable slop output are all intrinsic to the technology, and they hate to be reminded of that because there isn’t much they can do about it. From a technological standpoint, even locally run community fine-tuned open-weight models still originated from plagiarism and big corporate investments, and still output slop. From a social standpoint, the most they can do is try to claim legitimacy through consensus-building, and we are a threat to that.

      • Soyweiser@awful.systems · 5 points · edited · 4 days ago

        freshwater

        This reminded me of a few old comic stories where eventually the robot/computer was partially running on blood.

        (One of them was a Judge Dredd one where they had vampire robots who iirc used the blood to keep a president in suspended animation alive. Snap, Crackle and Pop; it had a surprisingly wholesome ending for a Dredd comic.)

    • nfultz@awful.systems · 5 points · 3 days ago

      No one is stopping anyone from editing out Jar Jar, if they care that much; just do it. Put up or shut up. /s

    • CinnasVerses@awful.systems · 9 points · 3 days ago

      This may be code for “I don’t want to see uppity women, brown people, and queer people in my shows.”

    • blakestacey@awful.systems · 7 points · 3 days ago

      One of the motivations for fanfiction is that people want more “filler”. They like the characters and (often) the world those characters inhabit, and so they write a story that lets them (and other fans) spend more time with the fiction.

      • YourNetworkIsHaunted@awful.systems · 4 points · 3 days ago

        The whole slice-of-life subgenre is all about this. No real conflict or plot, just scenes of the characters existing in their world. My wife both reads and writes that kind of thing and let me tell you the level of research and worldbuilding that goes into writing a simple meal scene or whatever.

    • smiletolerantly@awful.systems · 13 points · 4 days ago

      So in high school, I was one of those annoying kids that went “why do we have to learn how to analyze poems? We’re never gonna need this in real life” in English (well… German, but doesn’t matter) class.

      I’m deeply grateful to my teachers back then for patiently getting me to do these things anyway, because there came a point in my life years later where I suddenly understood that those “useless” lessons and hours “wasted” analyzing Goethe and Borchert and Fitzgerald handed me the tools to understand media (and not just literature!) instead of just consuming it.

      I hope it’s clear how that relates to the screenshot. More than that though, I sometimes feel like the slew of shit media over the past decade is at least in part to blame on writers/studios/… now assuming people do in fact merely consume. But that’s a rant that’s completely off-topic here, so I’ll shut up now.