Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(December’s finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)

    • jonhendry@awful.systems · 3 points · 4 hours ago (edited)

      I can see it making sense, what with CPUs moving to integrated RAM, and probably CPU-integrated flash, to maximize speed. The business of RAM and flash drive upgrades will become a very large but shrinking retrocomputing niche probably served by small Chinese fabs.

    • JFranek@awful.systems · 2 points · 2 hours ago

      tl;dr: AI! Agents! AI! Agents! AI! Agents! AI…

      Just one thing that caught my attention:

      AI code review helps developers. We … found that 72.6% of developers who use Copilot code review said it improved their effectiveness.

      Only 72.6%? So why the heck are the other almost 30% of devs using it? For funsies? They don’t say.

You’d think that, due to self-selection effects, most people who didn’t find Copilot effective wouldn’t be using it in the first place.

The only way that number makes sense to me is if people were forced to use Copilot and… no, wait, that checks out.

    • swlabr@awful.systems · 2 points · 3 hours ago

Computer scientist Louis Rosenberg argues that dismissing AI as a “bubble” or mere “slop” overlooks the tectonic technological shift that’s reshaping society.

“Please stop talking about the bubble bursting, I haven’t handed off my bag yet”

    • YourNetworkIsHaunted@awful.systems · 3 points · 4 hours ago

      We are three paragraphs and one subheading down before we hit an Ayn Rand quote. This clearly bodes well.

A couple of paragraphs later we’re ignoring both the obvious philosophical discussion about creativity and the more immediate argument about why this technology is being forced on us so aggressively. As much as I’d love to rant about this, I got distracted by the next bit talking about how micro-expressions will let LLMs decode emotions and whatever. I’d love to know this guy’s thoughts on that AI-powered phrenologist featured a couple weeks ago.

    • flere-imsaho@awful.systems · 17 points · 12 hours ago (edited)

      i hereby propose a new metric for a popular publication, the epstein number (Ē), denoting the number of authors who took flights to epstein’s rape island. generally, credible publications should have Ē=0. this one, after a very quick look, has Ē=2, and also hosts sabine hossenfelder.

    • swlabr@awful.systems · 4 points · 15 hours ago

I like this. Kinda wish it was either 10x longer and explained things a bit, or 10x shorter and more shitposty. Still, good.

  • scruiser@awful.systems · 16 points · 1 day ago

    Another day, another instance of rationalists struggling to comprehend how they’ve been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy

    A very long, detailed post, elaborating very extensively the many ways Anthropic has played the AI doomers: promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don’t really engage with the fact that Anthropic has lied and broken “AI safety commitments” to rationalists/lesswrongers/EAs shamelessly and repeatedly:

    https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=tBTMWrTejHPHyhTpQ

    I feel confused about how to engage with this post. I agree that there’s a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is “spun” in uncharitable ways.

    https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=CogFiu9crBC32Zjdp

    I think it’s sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.

    I would find this all hilarious, except a lot of the regulation and some of the “AI safety commitments” would also address real ethical concerns.

    • gerikson@awful.systems · 6 points · 14 hours ago

This would be worrying if there were any risk at all that the stuff Anthropic is pumping out is an existential threat to humanity. There isn’t, so this is just rats learning how the world works outside the blog bubble.

      • scruiser@awful.systems · 5 points · 10 hours ago

I mean, I assume the bigger they pump the bubble, the bigger the burst, but at this point the rationalists aren’t really so relevant anymore; they served their role in early incubation.

    • lagrangeinterpolator@awful.systems · 11 points · 17 hours ago

      If rationalists could benefit from just one piece of advice, it would be: actions speak louder than words. Right now, I don’t think they understand that, given their penchant for 10k word blog posts.

One non-AI example of this is the most expensive fireworks show in history, I mean, the SpaceX Starship program. So far, they have had 11 or 12 test flights (I don’t care to count the exact number by this point), and not a single one of them has delivered anything into orbit. Fans generally cling to a few parlor tricks like the “chopstick” stuff. They seem to have forgotten that their goal was to land people on the moon. This goal had already been accomplished over 50 years ago with the 11th flight of the Apollo program.

      I saw this coming from their very first Starship test flight. They destroyed the launchpad as soon as the rocket lifted off, with massive chunks of concrete flying hundreds of feet into the air. The rocket itself lost control and exploded 4 minutes later. But by far the most damning part was when the camera cut to the SpaceX employees wildly cheering. Later on there were countless spin articles about how this test flight was successful because they collected so much data.

      I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc. Now, I choose to look at the actions of the AI companies, and I can easily see that they do not have any ethics. Meanwhile, the rationalists are hypnotized by the Anthropic critihype blog posts about how their AI is dangerous.

    • nfultz@awful.systems · 7 points · 2 days ago

      He came by campus last spring and did a reading, very solid and surprisingly well-attended talk.

    • Soyweiser@awful.systems · 6 points · 2 days ago

      Always thought she should have stuck to acting.

(I know, Hayek just always reminds me of how people put his quotes over Salma Hayek’s image, and then get really mad at her, and not at him. I always wonder if people would have been just as mad if it was Friedrich’s image and not Salma’s, due to the sexism aspect.)

  • Seminar2250@awful.systems · 12 points · 2 days ago (edited)

something i was thinking about yesterday: so many people i respect (used to respect) have admitted to using llms as a search engine. even after i explain the seven problems with using a chatbot this way:

    1. wrong tool for the job
    2. bad tool
    3. are you fucking serious?
    4. environmental impact
    5. ethics of how the data was gathered/curated to generate[1] the model
    6. privacy policy of these companies is a nightmare
    7. seriously what is wrong with you

    they continue to do it. the ease of use, together with the valid syntax output by the llm, seems to short-circuit something in the end-user’s brain.

anyway, in the same way that some vibe-coded bullshit will end up exploding down the line, i wonder whether the use of llms as a search engine is going to have some similar unintended consequences — “oh, yeah, sorry boss, the ai told me that mr. robot was pretty accurate, idk why all of our secrets got leaked. i watched the entire series.”

    additionally, i wonder about the timing. will we see sporadic incidents of shit exploding, or will there be a cascade of chickens coming home to roost?


1. they call this “training” but i try to avoid anthropomorphising chatbots ↩︎

    • megaman@discuss.tchncs.de · 3 points · 4 hours ago

At work, I watched my boss google something, see the “AI overview”, say “who knows if this is right”, and then read it and close the tab.

It made me think about how this is how a rumor gets started. Even in a good case, they read the text with some scepticism, but two days later they’ll have forgotten where they heard it, and they’ll say they think whatever it was is right.

    • Amoeba_Girl@awful.systems · 7 points · 1 day ago (edited)

Sadly web search, and the web in general, have enshittified so much that asking ChatGPT can be a much more reliable and quicker way to find information. I don’t excuse it for anything that you could easily find on wikipedia, but it’s useful for queries such as “what’s the name of that free indie game from the 00s that was just a boss rush no you fucking idiot not any of this shit it was a game maker thing with retro pixel style or whatever ugh” where web search is utterly useless. It’s a frustrating situation, because of course in an ideal world chatbots don’t exist and information on the web is not drowned in a sea of predatory bullshit, reliable web indexes and directories exist and you can easily ask other people on non-predatory platforms. In the meanwhile I don’t want to blame the average (non-tech-evangelist, non-responsibility-having) user for being funnelled into this crap. At worst they’re victims like all of us.

      Oh yeah and the game’s Banana Nababa by the way.

    • jonhendry@awful.systems · 3 points (1 downvote) · 1 day ago

“they call this ‘training’ but i try to avoid anthropomorphising chatbots”

      You can train animals, you can train a plant, you can train your hair. So it’s not really anthropomorphising.

    • o7___o7@awful.systems · 15 points · 2 days ago (edited)

Yes, I know the kid in the omelas hole gets tortured each time I use the woe engine to generate an email. Is that bad?

    • ________@awful.systems · 12 points · 2 days ago

Is there any search engine that isn’t pushing an “AI mode” of sorts? Some are sneakier about it or give the option to “opt out”, like DuckDuckGo, but this all feels temporary, until it’s the only option.

I have found it strange how many people will say “I asked chatgpt” with the same normalcy that “googling” used to have.

    • V0ldek@awful.systems · 7 points · 2 days ago

Help, I asked AI to design my bathroom and it came up with this, does anyone know where I can find that wallpaper?

      it's the doom bathroom

    • bitofhope@awful.systems · 8 points · 2 days ago

      The follow-up is also funny:


      quote post from same poster: ā€œGrok fixed it for me:ā€

      quoted post: ā€œPeople were hating on Gemini’s floor plan, so I asked Grok to make it more practical.ā€

      An AI slop picture of a house floorplan at the top melding into a perspective drawing of a room interior below.

    • JFranek@awful.systems · 5 points · 2 days ago

      I don’t see the problem, that looks like a typical McMansion to me.

      Also, it’s nice the AI included a dedicated room for snorting cocaine (powder room).

  • rook@awful.systems · 14 points · 3 days ago

Reposted from Sunday, for those of you who might find it interesting but didn’t see it: here’s an article about the ghastly state of IT project management around the world, with a brief reference to ai which grabbed my attention, and made me read the rest, even though it isn’t about ai at all.

    Few IT projects are displays of rational decision-making from which AI can or should learn.

    Which, haha, is a great quote but highlights an interesting issue that I hadn’t really thought about before: if your training data doesn’t have any examples of what ā€œgoodā€ actually is, then even if your llm could tell the difference between good and bad, which it can’t, you’re still going to get mediocrity out (at best). Whole new vistas of inflexible managerial fashion are opening up ahead of us.

    The article continues to talk about how we can’t do IT, and wraps up with

It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined

    It is probably healthy to be reminded that the software industry was in a sorry state before the llms joined in.

    https://spectrum.ieee.org/it-management-software-failures

    • BlueMonday1984@awful.systems (OP) · 4 points · 3 days ago

      Considering the sorry state of the software industry, plus said industry’s adamant refusal to learn from its mistakes, I think society should actively avoid starting or implementing new software, if not actively cut back on software usage when possible, until the industry improves or collapses.

      That’s probably an extreme position to take, but IT as it stands is a serious liability - one that AI’s set to make so much worse.

      • rook@awful.systems · 5 points · 2 days ago

        For a lot of this stuff at the larger end of the scale, the problem mostly seems to be a complete lack of accountability and consequences, combined with there being, like, four contractors capable of doing the work, with three giant accountancy firms able to audit the books.

Giant government projects always seem to be a disaster, be they construction, healthcare, or IT, and no heads ever roll. Fujitsu was still getting contracts from the UK government even after it was clear they’d been covering up the absolute clusterfuck that was their Post Office system, one that resulted in people being driven to poverty and suicide.

At the smaller scale, well. “No warranty or fitness for any particular purpose” is the whole of the software industry outside of safety-critical firmware sorts of things. We have to expend an enormous amount of effort to get our products at work CE certified so we’re allowed to sell them, but the software that runs them? We can shovel that shit out of the door and no-one cares.

I’m not sure we’ll ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

        • BlueMonday1984@awful.systems (OP) · 5 points · 2 days ago

I’m not sure we’ll ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

Considering how “vibe coding” has corroded IT infrastructure at all levels, the AI bubble is set to trigger a 2008-style financial crisis when it bursts, and AI itself has been deskilling students and workers at an alarming rate, I can easily see why.

          • o7___o7@awful.systems · 7 points · 2 days ago (edited)

In the land of the blind, the one-eyed man will make a killing as an independent contractor cleaning up after this disaster concludes.

  • nfultz@awful.systems · 8 points · 3 days ago

Bubble or Nothing | Center for Public Enterprise (h/t The Syllabus); dry but good.

    Data centers are, first and foremost, a real estate asset

They specifically note that after the 2-5 year mini-perm, the developers plan to dump the debt into commercial mortgage-backed securities. Echoes of 2008.

However, project finance lawyers have mentioned that many data center project finance loans are backed not just by the value of the real estate but by tenants’ cash flows on “booked-but-not-billing” terms — meaning that the promised cash flow need not have materialized.

    Echoes of Enron.