Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • blakestacey@awful.systems · 10 days ago

      The phrase “adorned with academic ornamentation” sounds like damning with faint praise, but apparently they just mean it as actual praise, because the rot has reached their brains.

    • BigMuffN69@awful.systems · 10 days ago

      The implication that Soares / MIRI were doing serious research before is frankly journalistic malpractice. Matteo Wong can go pound sand.

      • TinyTimmyTokyo@awful.systems · 10 days ago

        It immediately made me wonder about his background. He’s quite young and looks to be just out of college. If I had to guess, I’d say he was probably a member of the EA club at Harvard.

          • blakestacey@awful.systems · 9 days ago

            Just earlier this month, he was brushing off all the problems with GPT-5 and saying that “OpenAI is learning from its greatest success.” He wrapped up a whole story with the following:

            At this stage of the AI boom, when every major chatbot is legitimately helpful in numerous ways, benchmarks, science, and rigor feel almost insignificant. What matters is how the chatbot feels—and, in the case of the Google integrations, that it can span your entire digital life. Before OpenAI builds artificial general intelligence—a model that can do basically any knowledge work as well as a human, and the first step, in the company’s narrative, toward overhauling the economy and curing all disease—it is aiming to build an artificial general assistant. This is a model that aims to do everything, fit for a company that wants to be everywhere.

            Weaselly little promptfucker.

      • Soyweiser@awful.systems · 10 days ago

        My copy of “the singularity is near” also does that btw.

        (E: Still looking to confirm that this isn’t just my copy, or if it is common, but when I’m in a library I never think to look for the book, and I don’t think I have ever seen the book anywhere anyway. It is the ‘our sole responsibility…’ quote, no idea which page, but it was early on in the book. ‘Yudnowsky’).

        Image and transcript

        Transcript: Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve…[T]here are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards and all of them will become obvious.

        —ELIEZER S. YUDNOWSKY, STARING INTO THE SINGULARITY, 1996

        Transcript end.

        How little has changed, he has always believed intelligence is magic. Also lol on the ‘smallest bit’. Not totally fair to sneer at this as he wrote this when he was 17, but oof being quoted in a book like this will not have been good for Yudkowsky’s ego.

  • saucerwizard@awful.systems · 13 days ago

    The usual suspects are mad about college hill’s exposé of the yud/kelsey piper eugenics sex rp. Or something, I’m in bed and can’t be bothered to link at the moment.

      • scruiser@awful.systems · 12 days ago

        Weird rp wouldn’t be sneer worthy on its own (although it would still be at least a little cringe); it’s the contributing factors like…

        • the constant IQ fetishism (Int is superior to Charisma but tied with Wis and obviously a true IQ score would be both Int and Wis)

        • the fact that Eliezer cites it like serious academic writing (he’s literally mentioned it to Yann LeCun in twitter arguments)

        • the fact that in-character lectures are the only place Eliezer has written up many of his decision theory takes he developed after the sequences (afaik, maybe he has some obscure content that never made it to lesswrong)

        • the fact that Eliezer thinks it’s another HPMOR-level masterpiece (despite how wordy it is, HPMOR is much more readable; even authors and fans of glowfic usually acknowledge the format can be awkward to read, and most glowfics require huge amounts of context to follow)

        • the fact that the story doubles down on the HPMOR flaw of confusion over which characters are supposed to be author mouthpieces (putting your polemics into the mouths of characters working for literal Hell… is certainly an authorial choice)

        • and the continued worldbuilding development of dath ilan, the rationalist utopia built on eugenics and censorship of all history (even the Hell state was impressed!)

        …At least lintamande has the commonsense understanding of why you avoid actively linking your bdsm dnd roleplay to your irl name and work.

        And it shouldn’t be news to people that KP supports eugenics, given her defense of Scott Alexander or comments about super babies, but possibly it is, and the headliner of weird roleplay will draw attention to it.

        • swlabr@awful.systems · 12 days ago

          obligatory reminder that “dath ilan” is misspelled “thailand” and I still don’t know why. Working theory is Yud wants to recolonise thailand

        • Architeuthis@awful.systems · 12 days ago

          That’s about what I was thinking, I’m completely ok with the weird rpg aspect.

          Regarding the second and third points though, I’ll admit I thought the whole thing was just yud indulging; I missed that it’s also explicitly meant as rationalist esoterica.

          • Soyweiser@awful.systems · 12 days ago

            also explicitly meant as rationalist esoterica.

            Always a bad sign when people can’t just let a thing be a thing just for enjoyment, but see everything as the ‘hustle’ (for lack of a better word). I’m reminded of that dating profile we looked at which showed that 99% of what he did was related to AI and AI doomerism, even the parties.

            • scruiser@awful.systems · 12 days ago

              I actually think “Project Lawful” started as Eliezer having fun with glowfic (he has a few other attempts at glowfics that aren’t nearly as wordy… one of them actually almost kind of pokes fun at himself and lesswrong), and then as it took off and the plot took the direction of “his author insert gives lectures to an audience of adoring slaves” he realized he could use it as an opportunity to squeeze out all the Sequence content he hadn’t bothered writing up in the past decade^. And that’s why his next attempt at a HPMOR-level masterpiece is an awkward to read rp featuring tons of adult content in a DnD spinoff, and not more fanfiction suitable for optimal reception to the masses.

              ^(I think Eliezer’s writing output dropped a lot in the 2010s compared to when he was writing the sequences and the stuff he has written over the past decade is a lot worse. Like the sequences are all in bite-size chunks, and readable in chunks in sequence, and often rephrase legitimate science in a popular way, and have a transhumanist optimism to them. Whereas his recent writings are tiny little hot takes on twitter and long, winding, rants about why we are all doomed on lesswrong.)

          • scruiser@awful.systems · 12 days ago

            I missed that it’s also explicitly meant as rationalist esoterica.

            It turns in that direction about 20ish pages in… and spends hundreds of pages on it, greatly inflating what could have been a much more readable length. It then gets back to actual plot events after that.

    • istewart@awful.systems · 13 days ago

      I’m sorry, we finally, officially need to cancel fantasy TTRPGs. If it’s not the implicit racialization of everything, it’s the use of the stat systems as a framework for literally masturbatory eugenics fetishization.

      You all can keep a stripped-down version of Starfinder as a treat. But if I see any more of this, we’re going all the way back to Star Wars d6 and that’s final.

      • scruiser@awful.systems · 13 days ago

        To be fair to DnD, it is actually more sophisticated than the IQ fetishists: it has 3 stats for mental traits instead of 1!

      • swlabr@awful.systems · 12 days ago

        also: The int-maxxing and overinflated ego of it all reminds me of the red mage from 8-bit theater, a webcomic based on final fantasy about the LW (light warriors) that ran from 2001-2010

        E: thinking back on it, reading this webcomic and seeing this character probably in some part inoculated me against people like yud without me knowing

        • Soyweiser@awful.systems · 12 days ago

          I never read 8bit. I read A Modest Destiny. Wonder how that guy is doing, he always was a bit weird and combative, but by the time he deleted his blog it was showing very early signs of right wing culture warrior bits (which was ironic considering he burned a us flag).

          • swlabr@awful.systems · 12 days ago

            Never read AMD (and shan’t). The author’s site appears to be live.

            8BT’s site has been taken over by bots, and I can’t be bothered to find an alternate source. Dead internet go brrrrr. Otherwise, the creator, Brian Clevinger, appears to have had a long career in comics, and has written many things for Marvel.

              • swlabr@awful.systems · 12 days ago

                Ah thanks! On mobile the main page gets redirected to spam, but the site is navigable from the archive.

            • Soyweiser@awful.systems · 12 days ago

              Yeah, but he used to have forums, and then a blog, and then no blog and then a blog again, and then a hidden blog etc. Think Howard has only a few minor credits on some games; he always came off as a bit of a weirdly combative nerd who thought he was right and the smartest in the room and didn’t get that people didn’t agree with his definitions/assumptions. He is a big idea guy for example. One of his comics was also called ‘the atheist, the agnostic and the asshole’ so yeah. The 00’s online comic world was something.

              • swlabr@awful.systems · 12 days ago

                has only a few minor credits[…], he always came off as a bit of a weirdly combative nerd who thought he was right and the smartest in the room and didn’t get that people didn’t agree with his definitions/assumptions. He is a big idea guy for example.

                gosh i’m sure glad that these kinds of people disappeared from the internet /s

      • Jonathan Hendry@iosdev.space · 13 days ago

        @istewart

        I would simply learn how to keep “games” and “reality” separate. I actually already know. It helps a lot.

        Racists are gonna racist no matter what. They didn’t need TTRPGs around to give them the idea of breaking out the calipers.

        • Soyweiser@awful.systems · 12 days ago

          Yes but basic dnd does have a lot of racism built in, esp with Gygax not being great on that end (‘nits make lice’, he said, about how it was lawful for paladins to kill orc babies). They did drop the sexism pretty quickly, but no big surprise his daughters were not into it. It certainly helps with the whole hierarchical mindset. My int/level is higher than yours so I’m better than you stuff. And sadly a lot of people do have trouble keeping both separate (and even that isn’t always ideal, esp in larps).

          But yes this, considering the context, is def a bit of a case of some of their ideologies, or ideological fantasies, bleeding through. (Esp considering Yud has been corrected on his faulty understanding of genetics before).

    • swlabr@awful.systems · 13 days ago

      We’ve definitely sneered at this before; I do not recall if it was known that KP was the cowriter in this weird forum RP fic.

      E: googling “lintamande kelsey piper” and looking at a reddit post digs up the inactive since 2018 AO3. A total just shy of 130k words, a little marvel stuff, most of it LOTR based, and some of it tagged “Vladmir Putin/Sauron”. How fun!

      No judgement from me, tbh. Fanfic be fanficking. I aint gonna read that shit tho.

        • Soyweiser@awful.systems · 13 days ago

          Not sure if anybody noticed the last time, but so they get isekai’d into a DND world, which famously runs on some weird form of fantasy feudalism, and they expect a random high int person to rule the country somehow? What in the primogenitor is this stuff, you can’t just think yourself into being a king, that is one of the issues with monarchies.

          E: ah no they are in a totalitarian state ruled by the literal forces of hell, places that totally praise merit-based upward mobility.

          • swlabr@awful.systems · 13 days ago

            ah no they are in a totalitarian state ruled by the literal forces of hell, places that totally praise merit-based upward mobility.

            Hey, write what you know

        • o7___o7@awful.systems · 13 days ago

          An encounter of this sort is what drove Lord Vetinari to make a scorpion pit for mimes, probably.

      • blakestacey@awful.systems · 12 days ago

        For all of the 2.2 seconds I have spent wondering who Yud’s coauthor on that was, I vaguely thought that it was Aella. I don’t know where I might have gotten that impression from. A student paper about fanfiction identified “lintamande” as Kelsey Piper in 2013.

        I tried reading the forum roleplay thing when it came up here, and I caromed off within a page. I made it through this:

        The soap-bubble forcefield thing looks deliberate.

        And I got to about here:

        Mad Investor Chaos heads off, at a brisk heat-generating stride, in the direction of the smoke. It preserves optionality between targeting the possible building and targeting the force-bubble nearby.

        … before the “what the fuck is this fucking shit?” intensified beyond my ability to care.

        • Amoeba_Girl@awful.systems · 12 days ago

          Yeah I couldn’t find the strength to even get to the naughty stuff, I gave up after one or two chapters. And I’ve read through all of HPMOR. 😐

          • blakestacey@awful.systems · 12 days ago

            I’m hard-pressed to think of anything else I have tried to read that was comparably impenetrable. At least when we played “exquisite corpse” parlor games on the high-school literary magazine staff, we didn’t pretend that anything we improvised had lasting value.

  • BigMuffN69@awful.systems · 11 days ago

    Gary asks the doomers, are you “feeling the agi” now kids?

    To which Daniel K, our favorite guru, lets us know that he has officially ~~moved his goal posts~~ updated his timeline so now the robogod doesn’t wipe us out until the year of our lorde 2029.

    It takes a big-brain superforecaster to have to admit your four-month-old rapture prophecy was already off by at least 2 years omegalul

    Also, love: updating towards my teammate (lmaou) who cowrote the manifesto but is now saying he never believed it. “The forecasts that don’t come true were just pranks bro, check my manifold score bro, im def capable of future sight, trust”

    • TinyTimmyTokyo@awful.systems · 12 days ago

      Clown world.

      How many times will he need to revise his silly timeline before media figures like Kevin Roose stop treating him like some kind of respectable authority? Actually, I know the answer to that question. They’ll keep swallowing his garbage until the bubble finally bursts.

      • YourNetworkIsHaunted@awful.systems · 11 days ago

        And once it does they’ll quietly stop talking about it for a while to “focus on the human stories of those affected” or whatever until the nostalgic retrospectives can start along with the next thing.

    • Soyweiser@awful.systems · 11 days ago

      So, as I have been on a cult comparison kick lately, how did it work for those doomsday cults when the world didn’t end and they picked a new date, did they become more radicalized or less? (I’m not sure myself; I’d assume the disappointed people leave, and the rest get worse).

      E: ah: https://slate.com/technology/2011/05/apocalypse-2011-what-happens-to-a-doomsday-cult-when-the-world-doesn-t-end.html

      … prophecies, per se, almost never fail. They are instead component parts of a complex and interwoven belief system which tends to be very resilient to challenge from outsiders. While the rest of us might focus on the accuracy of an isolated claim as a test of a group’s legitimacy, those who are part of that group—and already accept its whole theology—may not be troubled by what seems to them like a minor mismatch. A few people might abandon the group, typically the newest or least-committed adherents, but the vast majority experience little cognitive dissonance and so make only minor adjustments to their beliefs. They carry on, often feeling more spiritually enriched as a result.

      • nfultz@awful.systems · 10 days ago

        When Prophecy Fails is worth the read just for the narrative: Festinger literally had his grad students join a UFO / Dianetics cult and take notes in the bathroom, and kept it going for months. Really impressive amount of shoe leather compared to most modern psych research.

    • swlabr@awful.systems · 8 days ago

      Oh, looks like gemini is a fan of the hacky anti-comedy bits from some of my favorite podcasts

  • froztbyte@awful.systems · 14 days ago

    got sent this image

    wonder how many more of these things we’ll see before people start having a real bileful response to this (over and above the fact that a number of people have been warning about exactly this outcome for a while now)

    (transcript below)


    title: I gave my mom’s company an AI automation and now she and her coworkers are unemployed

    body: So this is eating me alive and I don’t really know where else to put it. I run this little agency that builds these AI agents for staffing firms. Basically the agent pre-screens candidates, pulls the info into a neat report, and sends it back so recruiters don’t waste hours on screening calls. It’s supposed to be a tool, not a replacement.

    My mom works at this mid sized recruiting company. She’s always complained about how long it takes to qualify candidates, so I set them up with one of my agents just to test it. It crushed it. Way faster, way cheaper, and honestly more consistent than most of their team.

    Fast forward two months and they’ve quietly laid off almost her whole department. Including my mom. I feel sick. Like I built something that was supposed to help people, and instead it wiped out my mom’s job and her team. I keep replaying it in my head like I basically automated my own family out of work.

      • scruiser@awful.systems · 13 days ago

        It’s pretty screwed up that humblebragging about putting their own mother out of a job is a useful opening for selling a scam-service. At least the people that buy into it will get what they have coming?

      • froztbyte@awful.systems · 14 days ago

        I didn’t dig into the post/username at all so I can’t guesstimate the likelihood of this! I get where you’re coming from

        (…I really need to finish my blog relaunch (this thought brought to you by the explication I was about to embark on in this context))

        (((it’s soon.gif tho!)))

    • Alex@lemmy.vg · 14 days ago

      Gonna have to agree with zogwarg here. I checked out the Reddit profile and they’re a self-proclaimed entrepreneur whose one-man “agency” has zero clients and has yet to even have an idea, attempting to crowdsource the latter on r/entrepreneur.

      • fullsquare@awful.systems · 13 days ago

        dude has a post named “from 0 to 1 clients in 48h” where someone calls him out for already claiming to have 17 customers, so it’s reasonable to assume that this guy is full of shit either way

        then again, there’s plenty of clueless people out there, so it could be real, because welcome to current year, where everything is fake, satire is dead, and reuters puts the onion out of business

  • Architeuthis@awful.systems · 10 days ago

    Meanwhile on /r/programmingcirclejerk, sneering at hn:

    transcription

    OP: We keep talking about “AI replacing coders,” but the real shift might be that coding itself stops looking like coding. If prompts become the de facto way to create applications/developing systems in the future, maybe programming languages will just be baggage we’ll need to unlearn.

    Comment: The future of coding is jerking off while waiting for AI managers to do your project for you, then retrying the prompt when they get it wrong. If gooning becomes the de facto way to program, maybe expecting to cum will be baggage we’ll need to unlearn.

    • YourNetworkIsHaunted@awful.systems · 9 days ago

      Promptfondlers are tragically close to the point. Like I was saying yesterday about translators, the future of programming in AI hell is going to be senior developers using their knowledge and experience to fix the bullshit that the LLM outputs. What’s going to happen when they retire and there’s nobody with that knowledge and experience to take their place? I’ll have sold off my shares by then, I’m sure.

      • Soyweiser@awful.systems · 13 days ago

        And how it fused Buddhism with more Christian religions. Considering how often you heard of old hackers being interested in the former.

      • fullsquare@awful.systems · 13 days ago

        aum recruited a lot of people, and also failed at some things that would be presumably easier to do safely than what they did

        Meanwhile, Aum had also attempted to manufacture 1,000 assault rifles, but only completed one.[37]

        otoh they were also straight up delusional about what they could achieve, including toying with the idea of manufacturing nukes, military gas lasers, and getting and launching a Proton rocket. (not exactly grounded for a group of people who couldn’t make AK-74s)

        they were also more media savvy in that they didn’t pollute the info space with their ideas using only blog posts; they rented airtime from a major radio station within russia, broadcasting both within the freshly former soviet union and into japan from vladivostok (which was a much bigger deal in the 90s than today)

        • BlueMonday1984@awful.systems (OP) · 13 days ago

          they were also more media savvy in that they didn’t pollute the info space with their ideas using only blog posts; they rented airtime from a major radio station within russia, broadcasting both within the freshly former soviet union and into japan from vladivostok (which was a much bigger deal in the 90s than today)

          It’s pretty telling about Our Good Friends’ media savviness that it took an all-consuming AI bubble and plenty of help from friends in high places to break into the mainstream.

          • o7___o7@awful.systems · 13 days ago

            With all that money sloshing around, it’s only a matter of time before they start cribbing from their neighbors and we get an anime adaptation of HPMoR.

          • fullsquare@awful.systems · 13 days ago

            radio transmissions in russia were the money shot for aum, and idk if it was a fluke or deliberate strategy. people had long had the expectation that radio and tv are authoritative, reliable sources (due to censorship that doubled as a fact-checker, and about all of it was state-owned), and in the 90s every bit of that broke down because of privatization, and now you could get on the air and say anything, with many taking it at face value, as long as you paid up. at the same time there was a major economic crisis, and cults prey on the desperate. result?

            Following the sarin gas attack on the Tokyo subway, two Russian Duma committees began investigations of the Aum – the Committee on Religious Matters and the Committee on Security Matters. A report from the Security Committee states that the Aum’s followers numbered 35,000, with up to 55,000 laymen visiting the sect’s seminars sporadically. This contrasts sharply with the numbers in Japan which are 18,000 and 35,000 respectively. The Security Committee report also states that the Russian sect had 5,500 full-time monks who lived in Aum accommodations, usually housing donated by Aum followers. Russian Aum officials, themselves, claim that over 300 people a day attended services in Moscow. The official Russian Duma investigation into the Aum described the cult as a closed, centralized organization.

            https://irp.fas.org/congress/1995_rpt/aum/part06.htm

    • fullsquare@awful.systems · 13 days ago

      aum:

      Advertising and recruitment activities, dubbed the “Aum Salvation plan”, included claims of […] realizing life goals by improving intelligence and positive thinking, and concentrating on what was important at the expense of leisure.

      this is in common with both our very good friends and scientology, but i think happy science is much stupider and more in line with srinivasan’s network states, in that it has/is an explicitly far-right political organization built in from day one

  • BlueMonday1984@awful.systems (OP) · 12 days ago

    A story in two Skeets - one from a TV writer, one from a software dev:

    On a personal sidenote, part of me suspects the AI bubble is gonna turn tech as a whole into a pop-culture punchline - the bubble’s all-consuming nature and wide-ranging harms, plus the industry’s relentless hype campaign, have already built a heavy amount of resentment against the industry, and the general public is gonna experience a colossal amount of schadenfreude once it bursts.

    • blakestacey@awful.systems · 11 days ago

      Oh, man, I have opinions about the people in this story. But for now I’ll just comment on this bit:

      Note that before this incident, the Malaney-Weinstein work received little attention due to its limited significance and impact. Despite this, Weinstein has suggested that it is worthy of a Nobel prize and claimed (with the support of Brian Keating) that it is “the most deep insight in mathematical economics of the last 25-50 years”. In that same podcast episode, Weinstein also makes the incendiary claim that Juan Maldacena stole such ideas from him and his wife.

      The thing is, you can go and look up what Maldacena said about gauge theory and economics. He very obviously saw an article in the widely-read American Journal of Physics, which points back to prior work by K. N. Ilinski and others. And this thread goes back at least to a 1994 paper by Lane Hughston, i.e., years before Pia Malaney’s PhD thesis. I’ve read both; Hughston’s is more detailed and more clear.

      • blakestacey@awful.systems · 10 days ago

        DRAMATIS PERSONAE

        • nightsky@awful.systems · 9 days ago

          I once randomly found Hossenfelder’s YT channel, it had a video about climate change and someone linked it somewhere, I didn’t know who she was. That video seemed fine, it correctly pointed out the urgency of the matter, and while I don’t know enough climate science to say much about the veracity of all its content, nothing stuck out as particularly weird to me. So I looked at some other videos from the channel… and boooooy did I quickly discover some serious conspiracy-style nonsense stuff. Real “the cabal of physicists are suppressing the truth” vibes, including “I got this email which I will read to you but I can’t tell you who it’s from, but it’s the ultimate proof” (both not quotes, just how I’d summarize the content…)

          • blakestacey@awful.systems · 9 days ago

            Longtime friends of the pod will recognize the trick of turning molehills into mountains. Creationists take a legitimate debate over a detail, like how many millions of years ago did species A and species B diverge, and they blow it up into “evolution is wrong”. Hossenfelder and her ilk do the same thing. They start with “pre-publication peer review has limited effectiveness” or “the allocation of funding is sometimes susceptible to fads”, and they blow it up into “physicists are a cabal out to suppress The Truth”.

            One nugget of fact that Hossenfelder in particular exploits is that the specific way we have been investigating the corner of physics we like to call “fundamental” is, possibly, arguably, maybe tapped out. The same poster of sub-sub-atomic particles that you’d have put on your wall 30 or 40 years ago is still good today, with an edit or two in the corner. We found the top quark, we found the Higgs, and so, possibly, arguably, maybe, building an even bigger CERN machine isn’t a worthwhile priority right now. Does this spell doom for physics? No, having to reorganize how we do things in one corner of our subject after decades of astonishing success is not “doom”.

    • swlabr@awful.systems · 11 days ago

      Author works on ML for DeepMind but doesn’t seem to be an out and out promptfondler.

      Quote from this post:

      I found myself in a prolonged discussion with Mark Bishop, who was quite pessimistic about the capabilities of large language models. Drawing on his expertise in theory of mind, he adamantly claimed that LLMs do not understand anything – at least not according to a proper interpretation of the word “understand”. While Mark has clearly spent much more time thinking about this issue than I have, I found his remarks overly dismissive, and we did not see eye-to-eye.

      Based on this I’d say the author is LLM-pilled at least.

      However, a fruitful outcome of our discussion was his suggestion that I read John Searle’s original Chinese Room argument paper. Though I was familiar with the argument from its prominence in scientific and philosophical circles, I had never read the paper myself. I’m glad to have now done so, and I can report that it has profoundly influenced my thinking – but the details of that will be for another debate or blog post.

      Best case scenario is that the author comes around to the stochastic parrot model of LLMs.

      E: also from that post, rearranged slightly for readability here. (the […]* parts are swapped in the original)

      My debate panel this year was a fiery one, a stark contrast to the tame one I had in 2023. I was joined by Jane Teller and Yanis Varoufakis to discuss the role of technology in autonomy and privacy. [[I was] the lone voice from a large tech company.]* I was interrupted by Yanis in my opening remarks, with claps from the audience raining down to reinforce his dissenting message. It was a largely tech-fearful gathering, with the other panelists and audience members concerned about the data harvesting performed by Big Tech and their ability to influence our decision-making. […]* I was perpetually in defense mode and received none of the applause that the others did.

      So the author is also tech-brained and not “tech-fearful”.

  • Architeuthis@awful.systems · 9 days ago

    Elon Musk wants to use imaginary future chatbot technology to brainwash (white) people into turning their vaginas into clown cars

    “AI is obviously gonna one-shot the human limbic system,” referring to the part of the brain responsible for human emotions. “That said, I predict — counter-intuitively — that it will increase the birth rate!” he continued without explanation. “Mark my words. Also, we’re gonna program it that way.”

    • Soyweiser@awful.systems · 12 days ago

      Well if the bubble pops he will have to pivot to people who pivot. (That is what is going to suck when the bubble pops, so many people are going to lose their jobs, and I fear a lot of the people holding the bags are not the ones who really should be punished the most (really hope not a lot of pension funds bought in). The stock market was a mistake).

      • BlueMonday1984@awful.systems (OP) · 12 days ago

        I imagine it’ll be a pretty lucrative pivot - the public’s ravenous to see AI bros and hypesters get humiliated, and Zitron can provide that in spades.

        Plus, he’ll have a major headstart on whatever bubble the hucksters attempt to inflate next.

          • BlueMonday1984@awful.systems (OP) · 12 days ago

            Y’know, I was predicting at least a few years without a tech bubble, but I guess I was dead wrong on that. Part of me suspects the hucksters are gonna fail to inflate a quantum bubble this time around, though.

            • blakestacey@awful.systems · 12 days ago

              Quantum computing is still too far out from having even a niche industrial application, let alone something you can sell to middle managers the world over. Anybody who day-traded could get into Bitcoin; millions of people can type questions at a chatbot. Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I’m doubtful.

              • BlueMonday1984@awful.systems (OP) · 12 days ago

                Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I’m doubtful.

                By my guess, no. AI earned its investor/VC dollars by providing bosses and CEOs alike a cudgel to use against labour, either by deskilling workers, degrading their work conditions, or killing their jobs outright.

                Quantum doesn’t really have that - the only Big Claimā„¢ I know it has going for it is its supposed ability to break pre-existing encryption clean in half, but that’s near-certainly gonna be useless for hypebuilding.

                • Soyweiser@awful.systems · 11 days ago

                  I think they will just start to make up capabilities, and with the added capabilities of quantum as a computing paradigm, AGI is back on the menu. Now, thanks to quantum, without all the expensive datacenters and problems. We are gonna put quantum in glasses! VR/Augmented reality quantum AI glasses!

    • blakestacey@awful.systems · 8 days ago

      Every task you outsource to a machine is a task that you don’t learn how to do.

      And school is THE PLACE WHERE YOU ARE SUPPOSED TO LEARN THINGS, JESUS H. FUCK