The week of Donald Trump’s inauguration, Sam Altman, the CEO of OpenAI, stood tall next to the president as he made a dramatic announcement: the launch of Project Stargate, a $500 billion supercluster in the rolling plains of Texas that would run OpenAI’s massive artificial-intelligence models. Befitting its name, Stargate would dwarf most megaprojects in human history. Even the $100 billion that Altman promised would be deployed “immediately” would far exceed the cost of the Manhattan Project ($30 billion in current dollars) and Operation Warp Speed, the COVID-19 vaccine program ($18 billion), rivaling the multiyear construction of the Interstate Highway System ($114 billion). OpenAI would have all the computing infrastructure it needed to complete its ultimate goal of building humanity’s last invention: artificial general intelligence (AGI).

But the reaction to Stargate was muted, as Silicon Valley had turned its attention west. A new generative AI model called DeepSeek R1, released by DeepSeek, an AI lab spun out of the Chinese hedge fund High-Flyer, sent a threatening tremor through the balance sheets and investment portfolios of the tech industry. DeepSeek’s latest version, allegedly trained for just $6 million (though this figure has been contested), matched the performance of OpenAI’s flagship reasoning model o1 at 95 percent lower cost. R1 even learned o1’s reasoning techniques, the much-hyped “secret sauce” that OpenAI relied on to maintain a wide technical lead over other models. Best of all, R1 is open-source down to the model weights, so anyone can download, inspect, and modify the model for free.

It’s an existential threat to OpenAI’s business model, which depends on using its technical lead to sell the most expensive subscriptions in the industry. It also threatens to pop a speculative bubble around generative AI inflated by the Silicon Valley hype machine, with hundreds of billions at stake.

Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion of capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years. Yet OpenAI, the current market leader, expects to lose $5 billion this year, with annual losses swelling to $11 billion by 2026. If the AI bubble bursts, it threatens not only to wipe out VC firms in the Valley but also to blow a gaping hole in the public markets and cause an economy-wide meltdown.

via https://dair-community.social/@timnitGebru/114316268181815093

  • balssh@lemm.ee · 4 days ago

    While I appreciate the detailed reply, I don’t share the same views. I think ultimately we will more or less merge with technology, rather than be on separate paths. I’m more worried about the 1% having overwhelming control of said technology than anything else.

    I’m also not very human-centric in my world view. If we are adaptable and smart enough, we will prevail; if not, we will perish as countless other species have before and after us. That doesn’t mean I don’t hold our achievements over the millennia dear.

    • cecilkorik@lemmy.ca · edited · 4 days ago

      we’re surrendering to it and saying it doesn’t matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?

      That’s exactly what I’m trying to get at above. I understand your position, I’m a fan of transhumanism generally and I too fantasize about the upside potential of technology. But I recognize the risks too. If you’re going to pursue becoming “one with the machine” you have to consider some pretty fundamental and existential philosophy first.

      It’s easy to say “yeah put my brain into a computer! that sounds awesome!” until the time comes that you actually have to do it. Then you’re going to have to seriously confront the possibility that what comes out of that machine is not going to be “you” at all. In some pretty serious ways, it is just a mimicry of you, a very convincing simulacrum of what used to be “you” placed over top of a powerful machine with its own goals and motivations, wearing you as a skin.

      The problem is, by the time you’ve reached that point where you can even start to seriously consider whether you or I are comfortable making this transition, it’s way too late to put on the brakes. We’ve irrevocably made our decision to replace humanity at that point, and it’s not ever going to stop if we change our minds at the last minute. We’re committed to it as a species, even if as individuals, we choose not to go through with it after all. There’s no turning back, there’s no quaint society of “old humans” living peaceful blissful lives free of technology. It’s literally the end for the human race. And the beginning of something new. We won’t know if that “something new” is actually as awesome as we imagined it would be, until it’s too late to become anything else.

      • t3rmit3@beehaw.org · 3 days ago

        Frankly, I think that fears about “continuity of consciousness” are jumping the gun a little as an objection to current AI. Water usage, capitalism, and the asymmetry of information creation and spread are much more pressing concerns, even in the medium to long term.