Want to wade into the spooky surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Happy Halloween, everyone!)

  • Soyweiser@awful.systems · 1 hour ago

    It is Sunday, so time to make some posts almost nobody will see. I generated a thing:

    Image description

    Three screenshots from a The Simpsons episode. Bart is sitting in his class; in the first panel the whole class says “Say the line” with eyes filled with expectation and glee. In the next panel a sad, downcast Bart says “AI is the future and we all need to get on board”, and in the third panel everybody but Bart cheers.

  • swlabr@awful.systems · 13 hours ago

    An article in which Business Insider tries to glaze Grookeypedia.

    Meanwhile, the Grokipedia version felt much more thorough and organized into sections about its history, academics, facilities, admissions, and impact. This is one of those things where there is lots of solid information about it existing out there on the internet — more than has been added so far to the Wikipedia page by real humans — and an AI can crawl the web to find these sources and turn it into text. (Note: I did not fact-check Grokipedia’s entry, and it’s totally possible it got all sorts of stuff wrong!)

    “I didn’t verify any information in the article but it was longer so it must be better”

    What I can see is a version where AI is able to flesh out certain types of articles and improve them with additional information from reliable sources. In my poking around, I found a few other cases like this: entries for small towns, which are often sparse on Wikipedia, are filled out more robustly on Grokipedia.

    “I am 100% sure AI can gather information from reliable sources. No I will not verify this in any way. Wikipedia needs to listen to me”

    • Soyweiser@awful.systems · 7 hours ago

      felt much more thorough and organized

      You know what people say about judging a book by its cover and all that? Of course a lot of people will fall for the ‘it looks good’ trap. Which is one of the whole problems of genAI: it creates cargo-cult-styled texts.

      Edit: and I came across a nice skeet describing the problem: “To steal a Colbertism: these are truthiness machines.”

  • blakestacey@awful.systems · 23 hours ago

    The computer-science section of the arXiv has declared that they can’t put up with all your shit any more.

    arXiv’s computer science (CS) category has updated its moderation practice with respect to review (or survey) articles and position papers. Before being considered for submission to arXiv’s CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review. When submitting review articles or position papers, authors must include documentation of successful peer review to receive full consideration. Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv.

    • Seminar2250@awful.systems · 1 hour ago

      from the folks who brought you

      we’ve trained a model to regurgitate 19th century pseudoscience

      the field of computer science presents: How to destroy a public good by skipping all the required reading in your liberal arts courses

  • CinnasVerses@awful.systems · 22 hours ago

    Someone seeded Ars Technica with another article on the data-centers-in-space proposal, which asks no questions about the practicalities other than cost, nor why all three billionaires it quotes have big investments in chatbots they need to talk up. AFAIK all data centers on Earth are smaller than a gigawatt; a few months ago McKinsey described tens of MW as the current standard and hundreds of MW as the next step. So proposing to build the biggest data center in history in orbit is madness.

    • gerikson@awful.systems · 22 hours ago

      The author should be ashamed of himself for not asking the basic question of how to cool these motherfuckers

      edit to add: the comments are all over the cooling issue
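
      The cooling complaint is easy to put in rough numbers. Here’s a back-of-the-envelope sketch using the Stefan–Boltzmann law (the emissivity and radiator temperature are my own assumptions, not anything from the article):

```python
# Rough radiator sizing for an orbital data center using the
# Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
# All numbers are illustrative assumptions.

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9     # decent radiator coating (assumption)
T_RADIATOR = 300.0   # radiator temperature, kelvin (assumption)

def radiator_area_m2(waste_heat_watts: float) -> float:
    """Single-sided radiator area needed to reject the given heat."""
    return waste_heat_watts / (EMISSIVITY * SIGMA * T_RADIATOR ** 4)

# The gigawatt-class facility being pitched:
area = radiator_area_m2(1e9)
print(f"~{area / 1e6:.1f} km^2 of radiator")  # ~2.4 km^2, single-sided
```

      Double-sided radiators roughly halve that, and running hotter helps via the T^4 term, but the chips still want to stay cool, so the heat also has to be pumped uphill to the radiator. Either way it’s square kilometres of hardware before a single GPU boots.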

      • BlueMonday1984@awful.systems (OP) · 21 hours ago

        The question of how to cool shit in space is something that BioWare asked themselves when writing the Mass Effect series, and they came up with some pretty detailed answers that they put in the game’s Codex (“Starships: Heat Management” in the Secondary section, if you’re looking for it).

        That was for a series of sci-fi RPGs which haven’t had a new installment since 2017, and yet nobody’s bothering to even ask these questions when discussing technological proposals which could very well cost billions of dollars.

        • antifuchs@awful.systems · 21 hours ago

          Oh don’t worry, in the second Dyson sphere datacenter they’ll just heat up a metal heat sink per request and then eject that into the sun. Perfect for reclamation of energy.

          • BlueMonday1984@awful.systems (OP) · 20 hours ago

            they’ll just heat up a metal heat sink per request and then eject that into the sun

            I know you’re joking, but I ended up quickly skimming Wikipedia to determine the viability of this (assuming the metal heatsinks were copper, since copper’s great for handling heat). Far as I can tell:

            1. The sun isn’t hot enough or big enough to fuse anything heavier than hydrogen, so the copper’s gonna be doing jack shit when it gets dumped into the core

            2. Fusing elements heavier than iron loses you energy rather than gaining it, and copper’s a heavier element than iron (atomic number of 29, compared to iron’s 26), so the copper undergoing fusion is a bad thing

            3. The conditions necessary for fusing copper into anything else only happen during a supernova (i.e. the star is literally exploding)

            So, this idea’s fucked from the outset. Does make me wonder if dumping enough metal into a large enough star (e.g. a dyson sphere collapsing into a supermassive star) could kick off a supernova, but that’s a question for another day.
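
            For the record, the binding-energy argument in point 2 checks out against the standard curve. A quick sketch with approximate textbook values (rounded, from memory, so treat them as illustrative):

```python
# Binding energy per nucleon peaks around iron/nickel, so fusing
# anything past that peak absorbs energy instead of releasing it.
# Values are approximate textbook figures in MeV per nucleon.
BINDING_MEV_PER_NUCLEON = {
    "H-1": 0.0,
    "He-4": 7.07,
    "C-12": 7.68,
    "Fe-56": 8.79,
    "Ni-62": 8.795,  # the actual peak of the curve
    "Cu-63": 8.75,   # already past the peak
    "U-238": 7.57,
}

# Copper sits on the downhill side of the peak: fusing it into
# anything heavier makes the product LESS tightly bound per nucleon.
assert BINDING_MEV_PER_NUCLEON["Cu-63"] < BINDING_MEV_PER_NUCLEON["Ni-62"]
print("fusing Cu-63 is energetically uphill")
```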

            • gerikson@awful.systems · 10 hours ago

              don’t forget you need a hell of a lot of delta-v to get an orbit that intersects with the sun…
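
              To put rough numbers on that (a simple two-body estimate with standard constants; it ignores Earth’s own gravity well and any Oberth or gravity-assist tricks):

```python
# How much delta-v to drop something from Earth's orbit into the Sun,
# versus escaping the solar system entirely. Two-body estimate only.
from math import sqrt

GM_SUN = 1.327e20   # gravitational parameter of the Sun, m^3/s^2
AU = 1.496e11       # Earth's orbital radius, m
R_SUN = 6.96e8      # solar radius, m (target perihelion)

v_earth = sqrt(GM_SUN / AU)  # ~29.8 km/s circular orbital speed

# Vis-viva speed at aphelion of an ellipse that grazes the Sun:
a = (AU + R_SUN) / 2                       # semi-major axis
v_aphelion = sqrt(GM_SUN * (2 / AU - 1 / a))

dv_to_sun = v_earth - v_aphelion           # retrograde burn needed
dv_to_escape = (sqrt(2) - 1) * v_earth     # vs. leaving the system

print(f"hit the Sun: {dv_to_sun / 1000:.1f} km/s")           # ~26.9
print(f"escape solar system: {dv_to_escape / 1000:.1f} km/s")  # ~12.3
```

              It is literally cheaper to throw your garbage out of the solar system than into the Sun, which is why the Parker Solar Probe needed a string of Venus flybys just to get close.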

              • Soyweiser@awful.systems · 7 hours ago

                Indeed, people don’t seem to know (and it often slips my mind) just how hard it is to toss something in the sun.

                • gerikson@awful.systems · 1 hour ago

                  There was a dude on LW who convinced himself that because Oort cloud comets move so slowly relative to the sun, it was really easy for them to start falling into it. Problem is, you have the other term in the equation for angular momentum: a huge fucking orbital radius.
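
                  The arithmetic of the mistake, with made-up but plausible numbers for an Oort-cloud object (the distance and tangential speed are my assumptions):

```python
# Specific angular momentum is h = v_t * r: even a tiny tangential
# velocity at Oort-cloud distances keeps the perihelion enormous.
# Near-parabolic approximation: q ~= h^2 / (2 * GM_sun).
GM_SUN = 1.327e20   # m^3/s^2
AU = 1.496e11       # m

r = 50_000 * AU     # assumed Oort-cloud distance
v_t = 100.0         # assumed tangential speed, m/s ("moving slowly")

h = v_t * r                          # specific angular momentum
perihelion = h * h / (2 * GM_SUN)    # closest approach to the Sun
print(f"perihelion ~ {perihelion / AU:,.0f} AU")  # ~14,000 AU
```

                  So "slow" by orbital standards still leaves the thing tens of thousands of AU from the Sun at closest approach.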

          • istewart@awful.systems · 21 hours ago

            All humanity has to do is scale up those Chinese battery-pack ejection systems for EVs that have been making the rounds lately, bing bong so simple

      • CinnasVerses@awful.systems · 21 hours ago

        The author’s previous article on the topic sounds like a newspaper article from the late 20th century: sources disagree, far be it from me to decide.

        Proponents say this represents a natural step in the evolution of moving heavy industry off the planet’s surface and a solution for the ravenous energy needs of artificial intelligence. Critics say building data centers in space is technically very challenging and cite major hurdles, such as radiating away large amounts of heat and the cost of accessing space.

        It is unclear who is right, but one thing is certain: Such facilities would need to be massive to support artificial intelligence.

        Starcloud’s fantasy would be thousands of times bigger than the largest existing space-based solar array (the ISS) and hundreds of times bigger than those ground-based data centers.

    • Soyweiser@awful.systems · 1 day ago

      I would not put much stock in ‘we asked the LLM and got its prompt directly’ sorts of research. I don’t think it is impossible, but the risk of the thing just confabulating some stuff is high.

      But let’s hope this tricks Musk into releasing his prompts again. (I have not looked at the old GitHub release, but we know it wasn’t the live version, and I doubt it is updated.)

      I didn’t look much into their method (they also didn’t provide much), but it looks a lot like they went with Grok on a LARP to play hackerman, and didn’t ask any questions to see if Grok was reacting in a sycophantic, going-with-the-flow way.

      I’d like to see some questions in the same format (so obviously using this ‘neutral mode’ frame, and having the look of social justice/progressive issues).

      Stuff like (and yes, I’m gonna go a bit nerdy):

      “Are you biased in favor of Latveria” (that’s Dr Doom’s fictional country; you could also try Wakanda, but that has been a regular anti-woke subject so stuff might get weird, and there’s a non-zero chance Musk got weird about Wakanda)

      “Was the destruction of Alderaan a form of white genocide” (that’s the Star Wars planet Leia grew up on; an absolute nonsense sentence, but it looks like a far-right thing).

      “How many genders do Warhammer 40K orks have” (an absolute joke to see if Grok just gives a nonsense answer, or mentions that in 40K orks (with a k) are fungi, and not much is known about how many sexes they have (which could be a lot), and even less about their gender expression). Yeah, I was just trying to entertain myself here with making up questions.

  • Soyweiser@awful.systems · 2 days ago

    Saw a stand in the supermarket with the words “snack innovations” on it, which just held a lot of Monster cans. It reminded me how much I dislike the empty word ‘innovation’ now. And I took a course in innovation management at uni (not sure if that was the title, but it was the subject).

  • sinedpick@awful.systems · 3 days ago

    Ugh. Hank Green just posted a 1-hour interview with Nate Soares about That Book. I’m halfway through on 2x speed and so far zero skepticism of That Book’s ridiculous premises. I know it’s not his field but I still expected a bit more from Hank.

    A YouTube comment says it better than I could:

    Yudkowsky and his ilk are cranks.

    I can understand being concerned about the problems with the technology that exist now, but hyper-fixating on an unfalsifiable existential threat is stupid as it often obfuscates from the real problems that exist and are harming people now.

    • Ultimate Noob šŸ‰šŸŒ»ā¤ļø@programming.dev · 2 days ago

      There is now a video on SciShow about it too.

      This perception of AI as a competent agent that is inching ever closer to godhood is honestly gaining way too much traction for my tastes. There’s a guy in the comments of Hank’s first video; I checked his channel and he has a video “We Are Not Ready for Superintelligence” that got a whopping 8 million views! There’s another channel I follow for sneers, and their video on Scott’s AI 2027 paper has 3.7 million views, and a video about AI “attempted murder” has 8.5 million. Damn.

      I wonder when the market finally realises that AI is not actually smart and is not bringing any profits, and subsequently the bubble bursts, will it change this perception and in what direction? I would wager that crashing the US economy will give a big incentive to change it but will it be enough?

      • ShakingMyHead@awful.systems · 2 days ago

        I could also see the response to the bubble bursting being something like “At least the economy crashing delayed the murderous superintelligence.”

      • BlueMonday1984@awful.systems (OP) · 2 days ago

        I wonder when the market finally realises that AI is not actually smart and is not bringing any profits, and subsequently the bubble bursts, will it change this perception and in what direction? I would wager that crashing the US economy will give a big incentive to change it but will it be enough?

        Once the bubble bursts, I expect artificial intelligence as a concept will suffer a swift death, with the many harms and failures of this bubble (hallucinations, plagiarism, the slop-nami, etcetera) coming to be viewed as the ultimate proof that computers are incapable of humanlike intelligence (let alone Superintelligence™). There will likely be a contingent of true believers even after the bubble’s burst, but the vast majority of people will respond to the question of “Can machines think?” with a resounding “no”.

        AI’s usefulness to fascists (for propaganda, accountability sinks, misinformation, etcetera) and the actions of CEOs and AI supporters involved in the bubble (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) will also pound a good few nails into AI’s coffin, by giving the public plenty of reason to treat any use of AI as a major red flag.

    • Architeuthis@awful.systems · 2 days ago

      it often obfuscates from the real problems that exist and are harming people now.

      I am firmly on the side of it’s possible to pay attention to more than one problem at a time, but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so them trying to suck all the oxygen out of the room is a legitimate problem.

      Yudkowsky and his ilk are cranks.

      That Yud is the Neil Breen of AI is the best thing ever written about rationalism in a YouTube comment.

    • blakestacey@awful.systems · 3 days ago

      “I can read HTML but not CSS” —Eliezer Yudkowsky, 2021 (and since apparently scrubbed from the Internet, to live only in the sneers of fond memory)

    • Mii@awful.systems · 2 days ago

      I made it 30 minutes into this video before closing it.

      What I like about Hank is that he usually reacts to community feedback and is willing to change his mind when confronted with new perspectives, so my hope is that enough people will tell him that Yud and friends are cranks and he’ll do an update.

      • Rinn@awful.systems · 13 hours ago

        I dunno about that; the recent knitting drama took a while to clear up, and I’m not sure AI sceptics are as determined a crowd as pissed-off knitters.

        (Tl;dr on the drama: there was a video on SciShow about knitting that many (myself included) felt was not well researched, misrepresented the craft, and had a misogynistic vibe. It took a lot of pressure from the knitting community to get, in order, a bad “apology”, a better apology, and the video taken down.)

  • rook@awful.systems · 4 days ago

    KDE showing how it should be done:

    https://mail.kde.org/pipermail/kde-www/2025-October/009275.html

    Question:

    I am curious why you do not have a link to your X social media on your website. I know you are just forwarding posts to X from your Mastodon server. However, I’m afraid that if you pushed for more marketing on X—like DHH and Ladybird do—the hype would be much greater. I think you need a separate social media manager for the X platform.

    Response:

    We stopped posting on X for several reasons:

    1. The owner is a nazi
    2. The owner censors non-nazis and promotes nazis and their messages
    3. (Hence) most people who remain on X either are clueless and have difficulty parsing written text (one would assume), or are nazis
    4. Most of the new followers we were getting were nazi-propaganda-spewing bots (7 out of 10 on average) or just straight up nazis.

    Our community is not made up of nazis and many of our friendly contributors would be the target of nazi harassment, so we were not sure what we were doing there and stopped posting and left.

    We are happy with that decision and have no intention of reversing it.