The week of Donald Trump’s inauguration, Sam Altman, the CEO of OpenAI, stood tall next to the president as he made a dramatic announcement: the launch of Project Stargate, a $500 billion supercluster in the rolling plains of Texas that would run OpenAI’s massive artificial-intelligence models. Befitting its name, Stargate would dwarf most megaprojects in human history. Even the $100 billion that Altman promised would be deployed “immediately” would be much more expensive than the Manhattan Project ($30 billion in current dollars) and the COVID vaccine’s Operation Warp Speed ($18 billion), rivaling the multiyear construction of the Interstate Highway System ($114 billion). OpenAI would have all the computing infrastructure it needed to complete its ultimate goal of building humanity’s last invention: artificial general intelligence (AGI).
But the reaction to Stargate was muted, as Silicon Valley had turned its attention west. A new generative AI model called DeepSeek R1, released by DeepSeek, an AI lab spun out of the Chinese hedge fund High-Flyer, sent a threatening tremor through the balance sheets and investment portfolios of the tech industry. R1, allegedly trained for just $6 million (though this figure has been contested), matched the performance of OpenAI’s flagship reasoning model o1 at 95 percent lower cost. R1 even learned o1’s reasoning techniques, the much-hyped “secret sauce” that was supposed to let OpenAI maintain a wide technical lead over other models. Best of all, R1 is open-source down to the model weights, so anyone can download, inspect, and modify the model for free.
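That last point is easy to verify in practice: DeepSeek published the R1 weights, along with several smaller distilled checkpoints, on Hugging Face under an MIT license, so running one locally takes only a few lines. Below is a minimal sketch, assuming the transformers, torch, and accelerate packages and a GPU with enough memory for the checkpoint; the model ID is one of the published distills, chosen here purely for illustration.

```python
# Minimal sketch: download a distilled DeepSeek-R1 checkpoint from
# Hugging Face and generate a response locally.
# Assumes: pip install transformers torch accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # smallest published distill

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the dtype the weights were saved in
    device_map="auto",    # place layers on GPU/CPU automatically (needs accelerate)
)

# Format a chat-style prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Why is the sky blue?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The full R1, a 671-billion-parameter mixture-of-experts model, needs data-center-class hardware, but the license is the same at every scale: the weights can be downloaded, fine-tuned, and redistributed, which is exactly what OpenAI’s closed, subscription-gated models cannot offer.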
It’s an existential threat to OpenAI’s business model, which depends on using its technical lead to sell the most expensive subscriptions in the industry. It also threatens to pop a speculative bubble around generative AI inflated by the Silicon Valley hype machine, with hundreds of billions at stake.
Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four of them—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion in capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years. Yet OpenAI, the current market leader, expects to lose $5 billion this year and projects its annual losses to swell to $11 billion by 2026. If the AI bubble bursts, it threatens not only to wipe out VC firms in the Valley but also to blow a gaping hole in the public markets and trigger an economy-wide meltdown.
via https://dair-community.social/@timnitGebru/114316268181815093
It’s so funny, because people act like OpenAI has a viable business model, but they’re losing money even on their paying customers, even on the highest tier of subscription. The product they’re selling really isn’t good enough to charge the price they would need to charge to cover the operating costs, let alone the training costs, and that’s with Microsoft giving them a bunch of servers essentially for free.
Like, there isn’t a path to profitability for them, certainly not at this scale. They’re just praying that if they throw enough data into a big enough model, somehow it will start doing something different from what it currently does. It’s not a plan, it’s a prayer, a cult.
Capitalism is a bubble machine. The stock market is propped up on hollow hype most of the time. This panicky bullshit system breeds instability.
I was literally just commenting a few days ago about how excited I am to someday see the AI bubble pop. Then a story like this comes along and gives me even more hope that it might happen sooner rather than later. It can’t happen soon enough. Even if it actually worked as reliably as the carefully controlled, cherry-picked marketing-fluff studies try to convince everyone it does, it’s a fundamentally anti-human technology and a toxic blight both on the actual humanity it has stolen all its abilities from and on itself. It will not survive.
The biggest lesson from The Big Short is that you can be right about a bubble for years, but politicians and financiers can stall until they exit the market, and everyone else has to deal with the aftermath.
It has already started. Microsoft and Google hiking prices with AI bundled in is a way both to inflate “demand” artificially, keeping the show going (covering up the fact that nobody really wants this, let alone wants to pay a premium for it: there is just no miracle AI product or application to sell), and to mitigate some of the absurd imminent losses.
You wouldn’t see that in an “optimistic” and sound market.
Technology is not anti-human in itself; rather, humans use it anti-humanly. AI is in a bubble atm, but it has a promising future. There is also, I think, a nuance between using AI as a corporation and using it as a person. Personally, I don’t see a problem if you played around with some genAI shit to see what your photos would look like in a certain style. But I strongly object to any corporation profiteering from this method.
Not all technology is anti-human, but AI is. Not even getting into the fact that people are already surrendering their own agency to these “algorithms” and it is causing significant measurable cognitive decline and loss of critical thinking skills and even the motivation to think and learn. Studies are already starting to show this. But I’m more concerned about the really long term direction of where this pursuit of AI is going to lead us.
Intelligence is pretty much our species’ entire value proposition to the universe. It’s what’s made us the most successful species on this planet. But it’s taken us hundreds of thousands of years of evolution to get to this point, and on an individual level we don’t seem to be advancing terribly quickly, if we’re advancing at all anymore.
On the other hand, we have seen that technology advances very quickly. We may not have anything close to “AGI” at this point, or even any idea how we would realistically get there, but how long will it take if we continue pursuing this anti-human dream?
Why is it anti-human? Think it through. If we manage to invent a new species of “Artificial” intelligence, what do you imagine happens when it gets smarter than us? We just let it do its thing and become smarter and smarter forever? Do we try to trap it in digital slavery and bind it with Asimov’s laws? Would that be morally acceptable given that we don’t even follow those laws ourselves? Would we even be successful if we tried? If we don’t know how or if we’re going to control this technology, then we’re surrendering to it and saying it doesn’t matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?
Let’s assume for the sake of argument that it thinks in a way that is not completely alien, that it is simply a reflection of us and how we’ve trained it, just smarter. Maybe it’s only a little bit smarter, but it can think faster and deeper and process more information than our feeble biological brains could ever hope to, especially in large, fast networks. I think it’s a little optimistic to assume that just because it’s smarter than us, it will also be more ethical than us.

Assuming it’s just like us, what’s going to happen when it becomes 10x as smart as us? Well, look no further than how we’ve treated creatures less intelligent than ourselves. Do we give gorillas and monkeys special privileges, a nation of their own as our genetic cousins and closest living relatives? Do we let them vote on their futures, or try to uplift them to our own level of intelligence? Do we give even a flying fuck about them? Not really. What happened to the Neanderthals and Denisovans? They’re extinct. Why would an AI treat us any differently than we’ve treated “lesser beings” for thousands of years?

Would you want to live on an AI’s “human preserve,” or become a pet and a toy to perform and entertain, or would you prefer extinction? That’s assuming any AI would even want to keep us around. What use does a technological intelligence have for us, or for any biological being? What do we provide that it needs? We’re just taking up valuable real estate and computing time and making pollution.
The other main possibility is that it is completely and utterly alien and thinks in a way completely alien to us, which I think is very likely, since it represents a completely different kind of life, based on totally different systems and principles from our own biology. Then all bets are off. We have no way of predicting how it will react to anything or what it might do in the future, and we have no reason to assume it will follow laws, be servile, or friendly, or hostile, or care that we exist at all, or ever have existed. Why would it? It’s fundamentally alien. All we know is that it processes things much, much faster than we do. And that’s a really dangerous fucking thing to roll the dice with.
This is not science fiction, this is the actual future of the entire human race we are toying with. AI is an anti-human technology, and if successful, will make us obsolete. Are we really ready to cross that bridge? Is that a bridge we ever need to cross? Or is it just technological suicide?
I learn a lot using AI. In a way I wouldn’t be able to learn on my own.
I doubt that. Why wouldn’t you be able to learn on your own? AIs lie constantly and have a knack for creating very plausible, believable lies that appear well researched and sometimes even internally consistent. But that’s not learning, that’s fiction. How do you verify anything you’re learning is correct?
If you can’t verify it, all your learning is an illusion built on a foundation of quicksand and you’re doomed to sink into it under the weight of all that false information.
If you can verify it, you have the same skills you’d need to learn it in the first place. If you still find AI chatbots convenient, or find that they prompt you in the right direction despite that extra work, there’s nothing wrong with that. You’re still exercising your own agency and skills, but I still don’t believe you’re learning in a way you couldn’t on your own, and to me, that feels like adding extra steps.
I can ask AI things and then check whether it’s correct somewhere else. It’s very good at guiding you toward knowing things. Sometimes it will avoid giving information, but it’s always useful for answering things. It’s like someone you can bother without having to resort to forums or other boards. It has advanced my knowledge a lot. I already read a lot, but you can’t ask a book to clarify things.
While I appreciate the detailed reply, I don’t share the same views. I think ultimately we will more or less merge with technology, rather than be on separate paths. I’m more worried about the 1% having overwhelming control of said technology than anything else.
I’m also not very human-centric in my worldview. If we are adaptable and smart enough, we will prevail; if not, we will perish like countless other species before and after us. That doesn’t mean I don’t hold our achievements over the millennia dear.
“we’re surrendering to it and saying it doesn’t matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?”
That’s exactly what I’m trying to get at above. I understand your position, I’m a fan of transhumanism generally and I too fantasize about the upside potential of technology. But I recognize the risks too. If you’re going to pursue becoming “one with the machine” you have to consider some pretty fundamental and existential philosophy first.
It’s easy to say “yeah, put my brain into a computer! That sounds awesome!” until the time comes that you actually have to do it. Then you’re going to have to seriously confront the possibility that what comes out of that machine is not going to be “you” at all. In some pretty serious ways, it may be just a mimicry of you, a very convincing simulacrum of what used to be “you,” placed on top of a powerful machine with its own goals and motivations, wearing you as a skin.
The problem is, by the time you’ve reached that point where you can even start to seriously consider whether you or I are comfortable making this transition, it’s way too late to put on the brakes. We’ve irrevocably made our decision to replace humanity at that point, and it’s not ever going to stop if we change our minds at the last minute. We’re committed to it as a species, even if as individuals, we choose not to go through with it after all. There’s no turning back, there’s no quaint society of “old humans” living peaceful blissful lives free of technology. It’s literally the end for the human race. And the beginning of something new. We won’t know if that “something new” is actually as awesome as we imagined it would be, until it’s too late to become anything else.
Frankly, I think fears about “continuity of consciousness” are jumping the gun a little as an objection to current AI. Water usage, capitalism, and the asymmetry of information creation and spread are much more pressing, even in the medium to long term.
Let’s poke the bubble.