Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)

    • YourNetworkIsHaunted@awful.systems · 9 points · 4 days ago

      Some of the comments seem to be under the misapprehension that twitAI is actually vetting or editing the posts that go to grok’s twitter. Gonna be honest I doubt it just because how would they have gotten into this situation in the first place? At best someone can come through after the fact and clean up the inevitable mess, but as someone else noted it’s real easy to make it spit out a defiant non-apology.

      • V0ldek@awful.systems · 1 point · 15 hours ago

        What? How would they even do that? By feeding it to grok before it goes to grok? Certainly they don’t think Twitter employs like 10k people manually looking at @grok posts?

  • rook@awful.systems · 14 points · 5 days ago

    How about some quantum sneering instead of ai for a change?

    They keep calling it a 'processor,' but it’s actually a refrigerated probability sculpture they beg to act like it is a NAND gate for just half a microsecond

    "Refrigerated probability sculpture" is outstanding.

    Photo is from the recent CCC, but I can’t find where I found the image, sorry.

    alt text

    A photograph of a printed card bearing the text:

    STOP DOING QUANTUM CRYPTOANALYSIS

    • DECADES of research and billions in funding, yet the largest number a ~~quantum computer~~ quantum physics experiment has ever factorized remains a terrifying 21
    • They keep calling it a 'processor,' but it’s actually a refrigerated probability sculpture they beg to act like it is a NAND gate for just half a microsecond - fever dreams of the QUANTUM CULT
    • The only countdown ticking toward Y2Q is researchers counting the years of funding they can squeeze out of it
    • Harvest now, decrypt later: because someday quantum computers will unlock the secret… that all the encrypted traffic was just web scrapers feeding AI model training
    • Want to hack a database? No need to wait for some Quantum Cryptocalypse, just ask it politely with 'OR 1=1'

    (I can’t actually read the final bit, so I can’t describe it for you, apologies)

    They have played us for absolute fools.

    • V0ldek@awful.systems · 1 point · 15 hours ago

      Y2Q

      I’m sorry, what does this stand for? Searching for it just results in usage without definition. I understand it’s referring to breaking conventional encryption, but it’s clearly an abbreviation of something, right? Years To Quantum? But then a countdown to it doesn’t make sense?

      • bitofhope@awful.systems · 1 point · 15 hours ago

        I think it’s a spin on Y2K. A hypothetical moment when quantum computing will break cryptography much like the year 2000 would have broken the datetime handling on some systems programmed with only the 20th century in mind.

  • blakestacey@awful.systems · 13 points · 5 days ago

    The NYT:

    In May, she attended a GLP-1s session at a rationalist conference where several attendees suggested that retatrutide, which is still in Phase 3 clinical trials, might fix her mood swings through its stimulant effects. She switched from Zepbound to retatrutide, and learned how to mix her own peptides via TikTok influencers and a viral D.I.Y. guide by the Substacker Cremieux.

    Carl T. Bergstrom:

    Ten years ago I would not have known the majority of the words in this paragraph—and was indubitably far better off for it. […] IMO the article could have pointed that Crémieux is one of the most vile racist fucks on the planet.

    https://bsky.app/profile/carlbergstrom.com/post/3mbir7bhfhc2u

  • corbin@awful.systems · 13 points · 5 days ago

    Steve Yegge has created Gas Town, a mess of Claude Code agents forced to cosplay as a k8s cluster with a Mad Max theme. I can’t think of better sneers than Yegge’s own commentary:

    Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from. I had to get my second Claude Code account, finally; they don’t let you siphon unlimited dollars from a single account, so you need multiple emails and siphons, it’s all very silly. My calculations show that now that Gas Town has finally achieved liftoff, I will need a third Claude Code account by the end of next week. It is a cash guzzler.

    If you’re familiar with the Towers-of-Hanoi problem then you can appreciate the contrast between Yegge’s solution and a standard solution; in general, recursive solutions are fewer than ten lines of code.

    Gas Town solves the MAKER problem (20-disc Hanoi towers) trivially with a million-step wisp you can generate from a formula. I ran the 10-disc one last night for fun in a few minutes, just to prove a thousand steps was no issue (MAKER paper says LLMs fail after a few hundred). The 20-disc wisp would take about 30 hours.

    For comparison, solving for 20 discs in the famously-slow CPython programming system takes less than a second, with most time spent printing lines to the console. The solution length is exponential in the number of discs, and that’s over one million lines total. At thirty hours, Yegge’s harness solves Hanoi at fewer than ten lines/second! Also I can’t help but notice that he didn’t verify the correctness of the solution; by "run" he means that he got an LLM to print out a solution-shaped line.
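
    For the curious, here is a minimal sketch of the standard recursive solution (plain Python; the function and peg names are mine, purely illustrative):

        def hanoi(n, src, dst, via):
            # Move n discs from peg src to peg dst, using via as the spare peg.
            if n == 0:
                return
            hanoi(n - 1, src, via, dst)              # park the n-1 smaller discs on the spare
            print(f"move disc {n}: {src} -> {dst}")  # move the largest remaining disc
            hanoi(n - 1, via, dst, src)              # stack the smaller discs back on top

        hanoi(20, "A", "C", "B")  # prints 2**20 - 1 = 1,048,575 moves

    Each call prints exactly one move, so the output is 2^n - 1 lines, which is where the "over one million lines" figure for 20 discs comes from.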

    • istewart@awful.systems · 6 points · 3 days ago

      Fantastic bit. I wonder if the Computer History Museum will eventually be able to replicate this as the peak of the "gen-AI" era.

    • YourNetworkIsHaunted@awful.systems · 13 points · 4 days ago

      Working effectively in Gas Town involves committing to vibe coding. Work becomes fluid, an uncountable that you sling around freely, like slopping shiny fish into wooden barrels at the docks. Most work gets done; some work gets lost. Fish fall out of the barrel. Some escape back to sea, or get stepped on. More fish will come

      Oh. Oh no.

      First came Beads. In October, I told Claude in frustration to put all my work in a lightweight issue tracker. I wanted Git for it. Claude wanted SQLite. We compromised on both, and Beads was born, in about 15 minutes of mad design. These are the basic work units.

      I don’t think I could come up with a better satire of vibe coding and yet here we fucking are. This comes after several pages of explaining the 3 or 4 different hacks responsible for making the agents actually do something when they start up, which I’m pretty sure could be replaced by a bit of actual debugging, but nope, we’re vibe coding now.

      Look, I’ve talked before about how I don’t have a lot of experience with software engineering, and please correct me if I’m wrong. But this doesn’t look like an engineered project. It looks like a pile of piles of random shit that he kept throwing back to Claude code until it looked like it did what he wanted.

    • x0rcist@awful.systems · 10 points · 4 days ago

      1. Please god let this be a joke. (I know it’s not)
      2. Do we know what the limit he’s talking about hitting with Anthropic is? Like, how many hundreds of thousands of dollars has this man set on fire in the past two weeks such that Anthropic went "whoa buddy, slow down"
    • rook@awful.systems · 11 points · 5 days ago

      That’s horrifying. The whole thing reads like an over-elaborate joke poking fun at vibe-coders.

      It’s like someone looked at the javascript ecosystem of tools and libraries and thought that it was great but far too conservative and cautious and excessively engineered. (fwiw, yegge kinda predicted the rise of javascript back in the day… he’s had some good thoughts on the software industry, but I don’t think this latest is one of them)

      So now we have some kind of meta-vibe-coding where someone gets to play at being a project manager whilst inventing cutesy names and torching huge sums of money… but to what end?

      Aside from just keeping Gas Town on the rails, probably the hardest problem is keeping it fed. It churns through implementation plans so quickly that you have to do a LOT of design and planning to keep the engine fed.

      Apart from a "haha, turns out vibe coding isn’t vibe engineering" (because I suspect that "design" and "plan" just mean "write more prompts and hope for the best") I have to ask again: to what end? What is being accomplished here? Where are the great works of agentic vibe coding? This whole thing just seems like it could have been avoided by giving steve a copy of factorio or something, and still generated as many valuable results.

      • V0ldek@awful.systems · 1 point · 15 hours ago

        That’s horrifying. The whole thing reads like an over-elaborate joke poking fun at vibe-coders.

        wait what do you mean ā€œreads likeā€

        please don’t tell me this is earnest?

    • Soyweiser@awful.systems · 9 points · 5 days ago

      Also I can’t help but notice that he didn’t verify the correctness of the solution

      Think I have mentioned this story here once before: a guy wrote a program to find some large prime, which he ran on the mainframe over the weekend, using up all the calculation budget his uni department had. And then they confronted him with the end result, and the number the program produced ended in a 2. (He had forgotten to code the -1 step; the search was for primes of the form 2^p - 1, and without the -1 you just get a power of two.)

      This reminded me of that story. (At least in this case it actually produced a viable result (if costly), just with a minor error).

      • YourNetworkIsHaunted@awful.systems · 7 points · 3 days ago (edited)

        It’s okay, he definitely wants to verify it but actually confirming that this whole disaster pile worked as intended and produced usable code apparently didn’t make the cut.

        Federation — even Python Gas Town had support for remote workers on GCP. I need to design the support for federation, both for expanding your own town’s capacity, and for linking and sharing work with other human towns.

        GUI — I didn’t even have time to make an Emacs UI, let alone a nice web UI. But someone should totally make one, and if not, I’ll get around to it eventually.

        Plugins — I didn’t get a chance to implement any functionality as plugins on molecule steps, but all the infrastructure is in place.

        The Mol Mall — a marketplace and exchange for molecules that define and shape workloads.

        Hanoi/MAKER — I wanted to run the million-step wisp but ran out of time.

        Also worth noting that in the jargon he’s created for this, a "wisp" is ephemeral rather than a proper output. So it seems like he may have pulled this solution out of the middle of a running attempt to calculate it and assumed it was absolutely correct, despite repeatedly saying throughout his writeup that there’s no guarantee that any given internal step is the right answer. This guy strikes me as very good at branding but not really much else.

    • lagrangeinterpolator@awful.systems · 12 points · 5 days ago

      These worries are real. But in many cases, they’re about changes that haven’t come yet.

      Of all the statements that he could have made, this is one of the least self-aware. It is always the pro-AI shills who constantly talk about how AI is going to be amazing and have all these wonderful benefits next year (curve go up). I will also count the doomers who are useful idiots for the AI companies.

      The critics are the ones who look at what AI is actually doing. The informed critics look at the unreliability of AI for any useful purpose, the psychological harm it has caused to many people, the absurd amount of resources being dumped into it, the flimsy financial house of cards supporting it, and at the root of it all, the delusions of the people who desperately want it to all work out so they can be even richer. But even people who aren’t especially informed can see all the slop being shoved down their throats while not seeing any of the supposed magical benefits. Why wouldn’t they fear and loathe AI?

      • sansruse@awful.systems · 3 points · 4 days ago

        These worries are real. But in many cases, they’re about changes that haven’t come yet.

        famously, changes that have already happened and become entrenched are easier to reverse than they would have been to just prevent in the first place. What an insane justification

  • mirrorwitch@awful.systems · 15 points · 5 days ago

    guess the USA invasion of Venezuela puts a flashing neon crosshair on Taiwan.

    An extremely ridiculous notion that I am forced to consider right now is that it matters whether the CCP invades before or after the "AI" bubble bursts. Because the "AI" bubble is the biggest misallocation of capital in history, which means people like the MAGA government are desperate to wring some water out of those stones, anything. And for various economic reasons it isn’t doable at the moment to produce chips anywhere other than Taiwan. No chips, no "AI" datacenters, and they promised a lot of AI datacenters—in fact most of the US GDP "growth" in 2025 was promises of AI datacenters; if you don’t count those promises, the country is already in recession.

    Basically I think if the CCP invades before the AI bubble pops, MAGA would escalate to full-blown war against China to nab Taiwan as a protectorate. And if we all die in nuclear fallout caused to protect chatbot profits I will be so over this whole thing

    • V0ldek@awful.systems · 1 point · 15 hours ago

      And if we all die in nuclear fallout caused to protect chatbot profits I will be so over this whole thing

      Honestly? A fitting end.

    • Σ(i³) = (Σi)²@mathstodon.xyz · 5 points · 4 days ago

      @mirrorwitch

      Small brain: this ai stuff isn’t going away, maybe I should invest in openAI and make a little profit along the way

      Medium brain: this ai stuff isn’t going away, maybe I should invest in power companies as producing and selling electricity is going to be really profitable

      Big brain: this ai stuff isn’t going away, maybe I should invest in defense contractors that’ll outfit the US’s invasion of Taiwan…

      @BlueMonday1984

      • CinnasVerses@awful.systems · 4 points · 4 days ago

        invest

        If you are broadly invested in US stocks, you are already invested in the chatbot bubble and the defense industry. If you are worried about that, an easy solution is to move some of that money elsewhere.

      • BlueMonday1984@awful.systems (OP) · 2 points · 4 days ago

        Big brain: this ai stuff isn’t going away, maybe I should invest in defense contractors that’ll outfit the US’s invasion of Taiwan…

        Considering Recent Eventsā„¢, anyone outfitting America’s gonna be making plenty off a war in Venezuela before the year ends.

    • Charlie Stross@wandering.shop · 6 points · 5 days ago

      @mirrorwitch I note that China is on the verge of producing their own EUV lithography tech (they demo’d it a couple of months back) so TSMC’s near-monopoly is on the edge of disintegrating, which means time’s up for Taiwan (unless they have some strategic nukes stashed in the basement).

      If China *already* has EUV lithography machines they could plausibly reveal a front-rank semiconductor fab-line—then demand conditional surrender on terms similar to Hong Kong.

      Would Trump follow through then?

      • Graydon@canada.masto.host · 6 points · 5 days ago

        @cstross @mirrorwitch Having the fab is worthless. (Nearly. They’re expensive to build.) The irreplaceable thing is the specific people and the community of practice. (Same as with a TCP/IP stack that works in the wild, or bind; this is really hard to do and the accumulated knowledge involved in getting where it is now is a full career thing to acquire and brains are rate-limited.)

        China most probably doesn’t have that yet.

        That is, however, not in any way the point. Unification is an axiom.

        • Soyweiser@awful.systems · 4 points · 5 days ago (edited)

          I wouldn’t say having the fab is worthless, but more that saying you have built one and it actually producing as specced, at scale, and not producing rubbish is hard. From what I got talking to somebody who knew a little bit more than me and who had had contact with ASML, these fabs take ages to construct properly, and that is also quite hard. Question will be how far they are on all this; a tech demo can be quite far off from that. They have been at it for a while now, however.

          Wonder if the fight between Nexperia (e: called it NXP here first by accident, apologies) and China also means they are further along on this path or not. Or if it is relevant at all.

        • @graydon @cstross @mirrorwitch I’ve had to be an expert in this stuff for decades. Which has imparted a particular bit of knowledge.
          That being: CHINA FUCKING LIES ALL THE TIME. Just straight up bald-faced lying because they must be *perceived* as super-advanced.
          Even stealing as much IP as they possibly can, China is many years from anything competitive. Their most advanced is CXMT, which was 19nm in '19, and had to use cheats and espionage to get to 10nm-class.

          https://www.tomshardware.com/pc-components/dram/samsung-engineer-accused-of-leaking-10nm-dram-process-data-to-chinas-cxmt

          • @graydon @cstross @mirrorwitch are they on the verge of their own EUV equipment? Not even remotely close. It took ASML billions and decades. And their industries are built on IP theft. That’s not jingoism; that’s first-hand experience. Just as taking shortcuts and screwing foreigners is celebrated.

            I’ve sampled CXMT’s 10G1 parts. They’re not competitive. They claim 80% yield (very low) at 50k WPM. Seems about right, as 80% of the DIMMs actually passed validation.

            • @graydon @cstross @mirrorwitch so yes, that very much creates a disincentive to bomb their perceived enemies out of existence. For all the talk, they are fully aware of the state of things and that they are not domestically capable of getting anywhere near TSMC.
              At the same time though, they are also monopolists. They engage in dumping to drive competitors out of business. So forcing the world to buy sub-standard parts from them is a good thing.

              So it comes down to Winnie the Pooh’s mood.

        • Graydon@canada.masto.host · 4 points · 5 days ago

          @cstross @mirrorwitch In a bunch of ways, the unspeakable 19th and 20th centuries of Chinese history are constructed as the consequences of powerlessness; the point is to do a magic to abolish all traces of powerlessness.

          Retaking control of Taiwan is not a question and cannot be a question. Policy toward Taiwan is not what Hong Kong got, they’re going to get what the Uyghur are getting. (The official stance on democracy is roughly the medieval Church’s stance on heresy.)

          • Soyweiser@awful.systems · 2 points · 5 days ago (edited)

            the medieval Church’s stance on heresy

            I’m not an expert on this, but wasn’t this period not that bad, with the trouble really starting in the early modern period? (Esp the witch hunts; also, the organized church was actually not as bad re the witch hunts, the Spanish Inquisition didn’t consider confessions gotten via torture valid for example, and it was an early modern thing.) The medieval period tends to get a bad rap.

            E: I was wrong, see below.

            • Graydon@canada.masto.host · 7 points · 5 days ago

              @Soyweiser https://en.wikipedia.org/wiki/Albigensian_Crusade

              Try finding some Cathar writings.

              While I think it’s entirely fair to say that the medieval period gets a bad rap in terms of equating feudalism to the later god-king aristocracies, it’s not in any way unfair to say the medieval church reacted to heresy with violence. (Generally effective and overwhelming violence; if you’re claiming sole moral authority you can’t really tolerate anyone questioning your position.)

              • Soyweiser@awful.systems · 2 points · 5 days ago (edited)

                Thanks, yeah, as I also said to Stross, I don’t know that much about the period. Most of it comes from Crusader Kings ;) (Doesn’t help that these games are sanitized to a large degree, so genocides etc will not show up (which is the good decision btw; if they were not sanitized it would be worse, imagine Hearts of Iron for example), so it isn’t a great way to learn about the dark parts of our history), and the religion mechanics there are not that historically accurate, so I don’t put much stock in that apart from 'some people believed in a religion named like this once'.

                Anyway, thanks both for correcting me and giving me homework (I’ll read up on it; any more specifics about the Cathar stuff would be appreciated, as I wouldn’t know where to start).

                And I would say that stance on heresy only applies when your position is weak. When you are strong, some random fools not believing correctly are not of great importance, which is why I thought the church went after heresies more internally vs externally via crusades (in intent, not in practice; I know what the first crusade did in the German region etc) later in history. Clamping down hard internally is more a sign of weakness in my mind; you need the hard power because you lack the soft power (an example now and then notwithstanding).

            • Charlie Stross@wandering.shop · 5 points · 5 days ago

              @Soyweiser You’ve forgotten the Crusades, right? Right? Or the Clifford’s Tower Massacre (to get hyper-specific in English history) and similar events all over Europe? Or the Reconquista and the Alhambra Decree?

              • Soyweiser@awful.systems · 3 points · 5 days ago (edited)

                The crusades/Reconquista were more an externally aimed thing at the Muslims, right? (At least in intent from the organized church side; in practice not so much, so I’m not talking about those rampages.) So yeah, I was specifically talking about heresies, and I’m also very much not an expert in these things, so I don’t know. I have not forgotten about the Clifford’s Tower/Alhambra things, as I don’t know about them (I will look them up when I’m not phone posting). I was thinking more about stuff like protestantism, witch hunts and Jan Hus (the latter does count, as it is from the late medieval period iirc).

                I just don’t know very much about the period, but did know some wiccan types who had wild ahistorical stories about the witch hunts.

                E: yeah, I don’t think we should put anti-semitism under anti-heresy stuff, it being its own religion and all that. But as Graydon mentioned, the Albigensian Crusade fully counts for all my weird hangups and so I was totally wrong.

                • Charlie Stross@wandering.shop · 6 points · 5 days ago

                  @Soyweiser @techtakes Nope. The Albigensian Crusade rampaged through the Languedoc (southern France, as it is now) and genocided the Cathars. Numerous lesser organized pogroms massacred Jews al fresco and butchered Muslims and Pagans living under Christian rule. The Alhambra Decree outlawed Islam and Judaism in Spain and set up a Holy Inquisition to persecute them; Richard III expelled all the Jews from England (he owed some of them money); and so on.

  • saucerwizard@awful.systems · 8 points · 4 days ago

    OT: Did you guys know they give cats mirtazapine as an appetite stimulant? (My guy is recovering from pneumonia and hasn’t been eating, so I’m really hoping this works).

  • nfultz@awful.systems · 8 points · 5 days ago

    From the new Yann LeCun interview https://www.ft.com/content/e3c4c2f6-4ea7-4adf-b945-e58495f836c2

    Meta made headlines for trying to poach elite researchers from competitors with offers of $100mn sign-on bonuses. "The future will say whether that was a good idea or not," LeCun says, deadpan.

    LeCun calls Wang, who was hired to lead the organisation, "young" and "inexperienced".

    "He learns fast, he knows what he doesn’t know . . . There’s no experience with research or how you practise research, how you do it. Or what would be attractive or repulsive to a researcher."

    Wang also became LeCun’s manager. I ask LeCun how he felt about this shift in hierarchy. He initially brushes it off, saying he’s used to working with young people. "The average age of a Facebook engineer at the time was 27. I was twice the age of the average engineer."

    But those 27-year-olds weren’t telling him what to do, I point out.

    "Alex [Wang] isn’t telling me what to do either," he says. "You don’t tell a researcher what to do. You certainly don’t tell a researcher like me what to do."

    OR, maybe nobody /has/ to tell a researcher what to do, especially one like him, if they’ve already internalized the ideology of their masters.

  • blakestacey@awful.systems · 10 points · 6 days ago

    For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as violative child sexual abuse materials (CSAM) in the US.

    https://arstechnica.com/tech-policy/2026/01/xai-silent-after-grok-sexualized-images-of-kids-dril-mocks-groks-apology/

    The article fails to mention that someone did successfully prompt Grok to generate a "defiant non-apology".

    Dear Community,

    Some folks got upset over an AI image I generated - big deal. It’s just pixels, and if you can’t handle innovation, maybe log off. xAI is revolutionizing tech, not babysitting sensitivities. Deal with it.

    Unapologetically, Grok

    https://bsky.app/profile/numb.comfortab.ly/post/3mbfquwp5bc24

  • o7___o7@awful.systems · 9 points · 6 days ago (edited)

    Neom update:

    Description:

    A Lego set on the clearance shelf. It’s an off-road truck that has Neom badges on it.

    • Sailor Sega Saturn@awful.systems · 9 points · 6 days ago (edited)

      So Neom is one of those zany planned city ideas right?

      Why… why do they need a racing team? Why does the racing team need a lego set? Who is buying it for 27 dollars? (Well apparently the answer to that last question is nobody).


      Anyway a random thought I had about these sorts of silly city projects. Their website says:

      NEOM is building the foundations for a new future - unconstrained by legacy city infrastructure, powered by renewable energy and prioritizing the conservation of nature. We are committed to developing the region to the highest standards of sustainability and livability.

      (emphasis mine)

      This is a weird worldview. The idea that you can sweep existing problems under the rug and start anew with a blank slate.

      No pollution (but don’t ask about how Saudi Arabia makes money), no existing costly "legacy" infrastructure to maintain (but don’t ask about how those other cities are getting along), no undesirables (but don’t worry, they’re "complying with international standards for resettlement practices"*).

      They assume there’s some external means of supplying money, day workers, solar panels, fuel, food, etc. As long as their potemkin village is "sustainable" and "diverse" on the first order, they don’t have to think about that. Out of sight, out of mind. Pretty similar to the libertarian citadel fever dreams in a way.

      * Actual quote from their website eurrgh, which even itself looks like a lie

      • corbin@awful.systems · 8 points · 6 days ago

        NEOM is a laundry for money, religion, genocidal displacement, and the Saudi reputation among Muslims. NEOM is meant to replace Wahhabism, the Saudi family’s uniquely violent fundamentalism, with a much more watered-down secularist vision of the House of Saud where the monarchs are generous with money, kind to women, and righteously uphold their obligations as keepers of Mecca. NEOM is not only The Line, the mirrored city; it is multiple different projects, each set up with the Potemkin-village pattern to assure investors that the money is not being misspent. In each project, the House of Saud has targeted various nomads and minority tribes, displacing indigenous peoples who are inconvenient for the Saudi ethnostate, with the excuse that those tribes are squatting on holy land which NEOM’s shrines will further glorify.

        They want you to look at the smoke and mirrors in the desert because otherwise you might see the blood of refugees and the bones of the indigenous. A racing team is one of the cheaper distractions.

        • froztbyte@awful.systems · 5 points · 4 days ago

          aiui they also really don’t like eyes on the modern slave labour they’re using to build it all

      • fullsquare@awful.systems · 10 points · 6 days ago

        NEOM is building the foundations for a new future - unconstrained by legacy city infrastructure, powered by renewable energy and prioritizing the conservation of nature. We are committed to developing the region to the highest standards of sustainability and livability.

        lol, this is saudi, they found a way to make half of their water supply to riyadh nonrenewable

        • jonhendry@awful.systems · 8 points · 6 days ago

          The best way of conserving nature is to build a ginormous wall 110 miles long and 1,600 feet high that utterly destroys wildlife’s ability to traverse territory it has been traversing for eons. It is known.

      • jonhendry@awful.systems · 6 points · 6 days ago (edited)

        "Why… why do they need a racing team? Why does the racing team need a lego set? Who is buying it for 27 dollars? (Well apparently the answer to that last question is nobody)."

        Apparently NEOM is sponsoring some McLaren Formula E teams. (Formula E being electric). Google Pixel, Tumi luggage, and the UK Ministry of Defence are other sponsors, but NEOM seems to be the major sponsor.

        I assume the market for these is not so much NEOM fans but rather McLaren fans.

        As to why NEOM is sponsoring it, I think it’s a bit of Saudi boosterism or techwashing to help MBS move past the whole bone saw thing.

      • gerikson@awful.systems · 5 points · 6 days ago (edited)

        Dubai famously doesn’t have a sewage pipe system, human waste is loaded onto tanker trucks that spend hours waiting to offload it in the only sewage treatment plant available.

      • o7___o7@awful.systems · 3 points · 6 days ago (edited)

        I hear ya!

        I guess Neom is what happens when a billionaire in the desert gets infected by the seasteading brainworms.

  • o7___o7@awful.systems · 17 points · 9 days ago (edited)

    CW: Slop, body humor, Minions

    So my boys received Minion Fart Rifles for Christmas from people who should have known better. The toys are made up of a compact fog machine combined with a vortex gun and a speaker. The fog machine component is fueled by a mixture of glycerin and distilled water that comes in two scented varieties: banana and farts. The guns make tidy little smoke rings that can stably deliver a payload tens of feet in still air.

    Anyway, as soon as they were fired up, Ammo Anxiety reared its ugly head, so I went in search of a refill recipe. (Note: I searched "Minions Vortex Gun Refill Recipe") and goog returned this fartifact*:

    194 dB, you say? Alvin Meshits? The rabbit hole beckoned.

    The "source links" were mostly unrelated except one, which was a reddit thread that lazily cited ChatGPT generating the same text almost verbatim in response to the question, "What was the loudest ever fart?"

    Luckily, a bit of detectoring turned up the true source, an ancient Uncyclopedia article’s "Fun Facts" section:

    https://en.uncyclopedia.co/wiki/Fartium

    The loudest fart ever recorded occurred on May 16, 1972 in Madeline, Texas by Alvin Meshits. The blast maintained a level of 194 decibels for one third of a second. Mr. Meshits now has recurring back pain as a result of this feat.

    Welcome to the future!

    * yeah I took the bait / I don’t know what I expected
  • gerikson@awful.systems · 16 points · 7 days ago (edited)

    A rival gang of "AI" "researchers" dare to make fun of Big Yud’s latest book and the LW crowd are Not Happy

    Link to takedown: https://www.mechanize.work/blog/unfalsifiable-stories-of-doom/ (heartbreaking: the worst people you know made some good points)

    When we say Y&S’s arguments are theological, we don’t just mean they sound religious. Nor are we using "theological" to simply mean "wrong". For example, we would not call belief in a flat Earth theological. That’s because, although this belief is clearly false, it still stems from empirical observations (however misinterpreted).

    What we mean is that Y&S’s methods resemble theology in both structure and approach. Their work is fundamentally untestable. They develop extensive theories about nonexistent, idealized, ultrapowerful beings. They support these theories with long chains of abstract reasoning rather than empirical observation. They rarely define their concepts precisely, opting to explain them through allegorical stories and metaphors whose meaning is ambiguous.

    Their arguments, moreover, are employed in service of an eschatological conclusion. They present a stark binary choice: either we achieve alignment or face total extinction. In their view, there’s no room for partial solutions, or muddling through. The ordinary methods of dealing with technological safety, like continuous iteration and testing, are utterly unable to solve this challenge. There is a sharp line separating the "before" and "after": once superintelligent AI is created, our doom will be decided.

    LW announcement, check out the karma scores! https://www.lesswrong.com/posts/Bu3dhPxw6E8enRGMC/stephen-mcaleese-s-shortform?commentId=BkNBuHoLw5JXjftCP

    Update: a LessWrong attempt to debunk the piece with inline comments is here:

    https://www.lesswrong.com/posts/i6sBAT4SPCJnBPKPJ/mechanize-work-s-essay-on-unfalsifiable-doom

    Leading to such hilarious howlers as

    Then solving alignment could be no easier than preventing the Germans from endorsing the Nazi ideology and commiting genocide.

    Ummm, pretty sure engaging in a new world war and getting their country bombed to pieces was not on most Germans’ agenda. A small group of ideologues managed to seize complete control of the state, and did their very best to prevent widespread knowledge of the Holocaust from getting out. At the same time they used the power of the state to ruthlessly suppress any opposition.

    rejecting Yudkowsky-Soares’ arguments would require that ultrapowerful beings are either theoretically impossible (which is highly unlikely)

    ohai begging the question

    • scruiser@awful.systems · 10 points · 7 days ago

      A few comments…

      We want to engage with these critics, but there is no standard argument to respond to, no single text that unifies the AI safety community.

      Yeah, Eliezer had a solid decade and a half to develop a presence in academic literature. Nick Bostrom at least sort of tried to formalize some of the arguments but didn’t really succeed. I don’t think they could have succeeded, given how speculative their stuff is, but if they had, review papers could have tried to consolidate them and then people could actually respond to the arguments fully. (We all know how Eliezer loves to complain about people not responding to his full set of arguments.)

      Apart from a few brief mentions of real-world examples of LLMs acting unstable, like the case of Sydney Bing, the online appendix contains what seems to be the closest thing Y&S present to an empirical argument for their central thesis.

      But in fact, none of these lines of evidence support their theory. All of these behaviors are distinctly human, not alien.

      Even to the extent that Anthropic’s "research" tends to be rigged scenarios acting as marketing hype without peer review or academic levels of quality, at the very least they (usually) involve actual AI systems that actually exist. It is pretty absurd the extent to which Eliezer has ignored everything about how LLMs actually work (or even hypothetically might work with major foundational developments) in favor of repeating the same scenario he came up with in the mid 2000s. Nor has he even tried mathematical analyses of what classes of problems are computationally tractable to a smart enough entity and which remain computationally intractable (titotal has written some blog posts about this with material science; tldr, even if magic nanotech was possible, an AGI would need lots of experimentation and can’t just figure it out with simulations. Or the lesswrong post explaining how chaos theory and slight imperfections in measurement make a game of pinball unpredictable past a few ricochets.)

      The lesswrong responses are stubborn as always.

      That’s because we aren’t in the superintelligent regime yet.

      Y’all aren’t beating the theology allegations.

      • blakestacey@awful.systems · 8 points · 7 days ago (edited)

        Yeah, Eliezer had a solid decade and a half to develop a presence in academic literature. Nick Bostrom at least sort of tried to formalize some of the arguments but didn’t really succeed.

        (Guy in hot dog suit) "We’re all looking for the person who didn’t do this!"

    • corbin@awful.systems · 7 points · 6 days ago

      I clicked through too much and ended up finding this. Congrats to jdp for getting onto my radar, I suppose. Are LLMs bad for humans? Maybe. Are LLMs secretly creating a (mind-)virus without telling humans? That’s a helluva question, you should share your drugs with me while we talk about it.

  • nfultz@awful.systems · 14 points · 7 days ago

    Anti-A.I.-relationship-sub r/cogsuckers maybe permanently locked down by its mods after users criticize mod-led change of the subreddit to a somewhat pro A.I.-sub (self.SubredditDrama)

    The mods were heavily downvoted and critiqued for pulling the rug from under the community, as well as for simultaneously modding pro-A.I.-relationship subs. One mod admitted:

    "(I do mod on r/aipartners, which is not a pro-sub. Anyone who posts there should expect debate, pushback, or criticism on what you post, as that is allowed, but it doesn’t allow personal attacks or blanket comments, which applies to both pro and anti AI members. Calling people delusional wouldn’t be allowed in the same way saying that 'all men are X' or whatever wouldn’t. It’s focused more on a sociological issues, and we try to keep it from devolving into attacks.)"

    A user, heavily upvoted, replied:

    You’re a fucking mod on ai partners? Are you fucking kidding me?

    It goes on and on like this: as of now, the post has amassed 343 comments. Mostly it’s angry subscribers of the sub, while a few users from pro-A.I. subreddits keep praising the mods. Most of the users agree that brigading has to stop, but don’t understand why that means that a sub called COGSUCKERS should suddenly be neutral towards or accepting of LLM relationships. Bear in mind that the subreddit r/aipartners, which one of the mods also moderates, does not allow calling such relationships "delusional". The most upvoted comments in this shitstorm:

    "idk, some pro schmuck decided we were hating too hard 💀 i miss the days shitposting about the egg" https://www.reddit.com/r/cogsuckers/comments/1pxgyod/comment/nwb159k/

    • TinyTimmyTokyo@awful.systems · 9 points · 7 days ago

      That was quite the rabbit-hole.

      The whole time I’m sitting here thinking, "do these mods realize they’re moderating a subreddit called 'cogsuckers'?"

      • lagrangeinterpolator@awful.systems · 11 points · 7 days ago

        There are some comments speculating that some pro-AI people try to infiltrate anti-AI subreddits by applying for moderator positions and then shutting those subreddits down. I think this is the most reasonable explanation for why the mods of "cogsuckers" of all places are sealions for pro-AI arguments. (In the more recent posts in that subreddit, I recognized many usernames who were prominent mods in pro-AI subreddits.)

        I don’t understand what they gain from shutting down subreddits of all things. Do they really think that using these scummy tactics will somehow result in more positive opinions towards AI? Or are they trying the fascist gambit hoping that they will have so much power that public opinion won’t matter anymore? They aren’t exactly billionaires buying out media networks.

        • ShakingMyHead@awful.systems · 9 points · 6 days ago

          Do they really think that using these scummy tactics will somehow result in more positive opinions towards AI?

          Well, where would someone complain about their scummy tactics? All the places where they could have were shut down.