

I mean, isn’t this what Gacha games basically are?


Just when I think California couldn’t possibly come up with dumber laws, they deliver yet again.
There are genuine concerns they could be addressing, but instead they go after something that's going to be near impossible for them to enforce.
Blueprints exist for homemade 3D printers that can be built with a pretty short list of parts from Digikey.


One can only wonder as to why that may be.


Well, I can say technology stack cost is a consideration, but it isn't a leading consideration. Even in the scope of AI, the machine cost isn't the primary factor. The single biggest cost in development is scope and complexity. That is the thing AI is looking to address: easing the complexity of projects. And I don't say that as a promotion for AI, just, that's what the advertisement for the stuff is all about.
The largest cost is things just being way too complex, and right behind that is hiring. I've known companies to let go of staff too soon, and it's cost them when they needed new talent back in. It's cheaper in the long run to just give raises to the people who've been there than to fire them and go looking for new young blood. I usually put it to C-staff like this: we don't need a lot of energy to keep a rock shaped as a wheel rolling, but if you stop it and it falls flat, you're going to be paying to upright that stone. Sometimes that's what is required, sometimes it's not. But it's a choice no one should be making lightly without serious consideration of what's to come.
Third-party integrations are also a big cost and again, that's something AI is "supposed to" help with. And it's kind of the same reason as the first: complexity. That's the big thing you should keep in mind when normal people talk about AI. Complexity. It'll help you understand both sides of the AI debate. We are asking for more, faster, with more complex interactions, and a team trying to keep on top of all of that is… really hard, to say the least. We've come up with all kinds of "solutions" to tackle that complexity, which I won't go into, but things like CI, agile development, etc. are examples. And ask anyone, quote/unquote agile is a… complicated topic to broach. It absolutely has its fans and detractors; there's no single universal consensus.
But as AI eats all the hardware in existence, apparently, there's a tipping point where even if it ate all of it, that consumption cuts people off from getting to the product itself. At some point people will need a device to access AI for AI to be useful. We don't put grocery stores in a remote spot deep in a forest, far from everyone, for a reason.
The more likely thing is that AI carves out the upper end of the industry. Think for a second. The nVidia 5090, on the Blackwell architecture, surpasses the Hopper-based H100/H200 in AI performance. That 5090 is consumer grade. It's wild to think that the consumer market has room for such a massively powerful device. But I feel that's going to change. Let's imagine the nVidia 6090, a hypothetical next-gen GPU. In that AI world, the 6090 isn't marketed toward consumers; it's an enterprise device. Just like you wouldn't run into some average gamer running a Threadripper CPU, that's HPC territory. Instead the 6090 becomes consumer grade after five years, and it's the top-of-the-line consumer GPU when it hits. The reality is that they're just putting the 6090s they couldn't sell into consumer-friendly boxes.
We have to understand that consumer grade has encompassed a LOT. And yes, we are having a massive knee-jerk to that right now. I'd say the overreaction that's playing out is going to hurt the "AI bubble" more than it helps. We've yet to see if AI companies have "bitten off more than they can chew." If these chips are produced and the AI companies miss a payment, that's one thing. But if they keep on missing payments for hardware delivered, it will topple the whole effing thing so hard that we'll be replacing Humpty Dumpty with Sam Altman in children's songs. Not that they'd go bankrupt or anything, but we would see a surge of GPUs hitting the market, causing prices to plummet faster than a Boeing aircraft. The sheer ripple would begin ripping apart startups, slowly working its way up to the endpoints that provide AI services, and it would just keep going like a snowball down a Swiss ski resort's freshly groomed slope.
AI companies would indeed have leverage, but that's a double-edged sword to swing. They can do a lot more harm to themselves than good. We just don't know right now because this is all new. And yeah, our monkey brains hate it when we don't know something, but that's really all that can be said at this point. We just don't know. The interwoven chart everyone passes around, where A is buying from B, selling to C, who sells to A and buys from B, etc., etc., usually ends badly. But this situation has a lot of "unique"… "investors", aka there's a shit ton of corruption that governments around the world are overlooking. And there are still very greedy people betting against them all, because if their bet pays out, they'll be billionaires overnight.


But the alternative costs from the other side are creeping up. If you are a company looking to hire developers to write software, you need to provide development machines to those developers. A development machine that might have cost $2000 a couple years ago is well on the way to $6000-$7000 in the near future.
This isn't true, because development doesn't have to be on a max-powered system. Myself, I work on AS400 stuff. The compiler is on the remote machine. The database is on the remote machine. The JVM is on the remote machine. All I need is an editor and the ability to SSH into the machine; that's all that's required for development. SQL queries? I do those in a web tab; the query is run on the remote machine.
I could easily write all the code, debugging, SQL queries, ORM, the Java stuff, node.js stuff (yes, AS400 has node.js and it works pretty well with COBOL objects), and so on, on a Raspberry Pi if I needed to. This might surprise some people, and there will be people it won't surprise. Because waaaaaaaaaaaayyy back, that's how it actually worked. You had a terminal, and the machine you wrote code for and debugged on and whatnot was in the basement, and you were talking to it over some twinax. Historically speaking, we've done way more development on machines nowhere near us than on our own machines.
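Just to make that workflow concrete, here's a minimal sketch of the "edit locally, do the real work remotely" loop. The host name, file names, and remote commands are hypothetical placeholders; the same pattern applies whether the far end is an IBM i box, a Linux VM, or anything else you can SSH into.

```python
#!/usr/bin/env python3
"""Minimal sketch of a thin-client dev loop: edit on a weak local machine,
build and run on the remote box over SSH. Host, paths, and remote commands
below are hypothetical placeholders, not a real setup."""
import subprocess

HOST = "devbox.example.com"  # hypothetical remote development machine


def remote(cmd: str) -> str:
    """Run a single command on the remote machine and return its stdout."""
    result = subprocess.run(
        ["ssh", HOST, cmd], capture_output=True, text=True, check=True
    )
    return result.stdout


if __name__ == "__main__":
    # Push the file you just edited, then let the remote machine do the heavy lifting.
    subprocess.run(["scp", "orders.c", f"{HOST}:src/orders.c"], check=True)
    print(remote("cd src && make orders"))        # compile happens remotely
    print(remote("./src/orders --self-test"))     # run/tests happen remotely
```

The local box only ever runs an editor and ssh; whether it's a Pi or a $6000 laptop makes no difference to the build.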
As for the mobile stuff, that'll figure itself out. Or it won't, and we'll just have wrappers called apps that are just fullscreen web pages. But again, on the AS400 we had a product from Profound that basically wrapped a web app into a "native" app, and the product from Profound handled all the stuff like taking a picture and putting it on the IFS for an RPG program to pick up.
There are all kinds of ways around this. Devs working without beefy laptops is literally the way we did it for nearly forty years.
Tesla hasn’t put out a successful new product in 20 years, and it continues to barrel right along, with its useless hack CEO hanging on as the richest person in the world.
This conflates a ton of unrelated things. Tesla has been unseated by BYD, but the US Government is still hung up about Chinese carmakers entering the US market. Musk is the richest person because his wealth is largely predicated on the US economy, which, thanks to previous international cooperation, stood as the leader. Every day we are witnessing the US becoming weaker on the global stage, which means his wealth translates less and less to things outside of the US. The US stock market is only a big thing because the US and the dollar are what so many things are pegged to. When that ceases to be the case, the US stock market loses value in an international sense, and since Musk's wealth is largely tied to it, his wealth goes down with it.
But the success of Tesla isn't actually success; it's a story of the incredibly sorry state US automakers are in, that they can't provide a solid alternative. But don't get confused. Tesla sales are slowing drastically. This is why we are seeing the consolidation of Twitter, Tesla, and SpaceX. They're being consolidated because they're hitting rough patches standing alone.
So please don't confuse the odd situation of Tesla with "bubbles don't HAVE to burst anymore." Those are not correct conclusions here. Ford, GM, and the rest of the US automakers are so down BAD at the moment that Tesla is able to shine. That's really the only thing keeping them afloat; US carmakers are a joke at this point.
NOBODY who is responsible for enforcing anything like responsible economic activity will EVER allow the bubble to burst
Greed. That's what you are speaking of. But greedy people come in all kinds of flavors, and there's no shortage of people who short the market and make mad cash doing so. Greed is universal, but the "upside" (I guess we'll call it that) is that the system allows greedy people to bet against the system. There are people putting money on this whole thing crashing down, and the bigger the fall, the bigger the reward. There were a ton of people who made billions when the housing market crashed. People watched 9/11 and bet that the market would collapse in response. Don't underestimate people's greed. There are people who would bet money on innocent people getting shot if the odds were good.
Now, all that said, there's a difference between this AI bubble and the technology. If the bubble pops, AI will still be a thing; the bubble is not AI itself, it's the breakneck speed at which we're developing it.
How does that change what I said? Remote X is massively more bandwidth hungry than all the others. I mean, things like TeamViewer Tensor exist and, from what I've done with it, it's massively stable. RHEL works perfectly with it. So I don't want to hear this "can't get a commercially supported…" line; there are tons of vendors that will do thin client for you.
X is a terrible protocol for modern widgets, because modern widgets do their best to work around X; that's literally in the code. Look at GTK or Qt: both actively try to avoid going through X when they can and just render directly, because by every metric it's better to work directly with the hardware than to go through some slow middle layer that just spins and wastes cycles.
Heck, even the X developers have left X, because it's done. It's a dead technology. It doesn't matter how many people are deploying it in enterprise environments, or how well they are deploying it. There are no devs on the project, and GPUs keep changing. There are only so many ways you can keep band-aiding a GPU into thinking it's a giant frame buffer; at some point there's going to be a break in the underlying architecture of GPUs where treating them as just VRAM to dump data into will no longer work. The amount of die space given to the backwards-compatible VGA and SHM paths is minuscule on today's cards.
Heck, using MIT-SHM on X11 on a Pi is terrible. You usually get worse results, because the underlying hardware is woefully unsuited to being treated like an old video card. You actually do better using hardware acceleration. The usual mantra for X11 apps on the Pi is: if you get good results with shared memory, use that and never upgrade your underlying Pi; otherwise, always use the hardware where possible.
Also, unlike X, Wayland generally expects a GPU in your remote desktop servers, and have you seen the prices for those lately?
You don't even need a good one by today's standards. At most, most compositors just need to convert a pixmap into a texture. Anything that supports GLX_EXT_texture_from_pixmap will be enough, and at low resolutions you can just give it to your CPU; we're not talking intense operations. Literally anything from the last fifteen years of GPUs has enough power to handle these operations reasonably. Shoot, if your thin client is a Pi, the Pi itself has vastly more resources than it needs. You could literally have a cluster of Pis if you wanted; labwc is a completely fine compositor for basic thin clients and is basically the replacement for X on the Pi, because X11 was so misaligned with how modern GPUs actually work.
What I am saying is X can be whatever in "enterprise deployment", but X has stopped matching what modern machines look like. Video cards have become more than a bunch of bits dumped into VRAM. No matter how many deployments you've done, that doesn't change that fact. X barely resembles what modern systems of the last twenty-five years look like. Nobody is working on it. You can have 100 deployments under your belt; nobody is still working on it. No matter how you slice the attributes of X, nobody is actively coding for X any longer. And as for damage and whatnot, lots of implementations of wl_surface_damage_buffer are backed by EGL/DMABUF, because GPUs have been smart enough to do that on their own for the last fifteen years; most compositors make use of it.
Again, it doesn't matter how many deployments you might have: the hardware does it better than X will ever do it. It's impossible for X to do it better, because there's nobody there to write it better. And it will always be this way, until the heat death of the universe, unless someone picks up the massive task of taking care of Xorg. There's nothing that changes any of this reality.
Does this mean you need to drop X11 tomorrow? No. That's the entire point of why Xorg was open: so that you can keep it until someone rips it from your cold dead hands. But your stubbornness does not change the fact that X is absolutely garbage on the network, is massively inefficient, and most things these days actively try to avoid using X directly; when they have to, they just stuff uncompressed bits into a massive packet with zero optimization. You can totally mill grain with a stone wheel today, no one stops you. But you're not going to convince many people that that is the best way to mill grain. I don't know what else to say. I don't want you to stop using X, but your usage of it doesn't change any fact that I've stated. It's a very fat, very unoptimized, very slow protocol, and there are indeed commercial solutions that are better. I've just named one, but there are many. That is just reality; the world has moved past dual channel RAM and buffers. I've built VGA video cards, I know how to build a RAMDAC from logic gates, and all of that is gone in today's hardware, yet X still carries these silly assumptions about hardware that doesn't even exist anymore.
And the network transparency argument is long gone. While you can indeed push windows over the wire, most toolkits use client-side rendering/decorations. So you're just sending bloated pixmaps across the wire, when things like RDP, VNC, etc. deal better with compression, window damage, and so on. And anything relying on or accelerated by DRI3 is just NOT network transparent.
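To put rough numbers on the "bloated pixmaps" point, here's a back-of-the-envelope sketch; the resolution, update rate, and damage size are my own illustrative assumptions, not measurements of any particular setup.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope: pushing whole uncompressed frames over the wire vs.
shipping only the damaged region. All figures are illustrative assumptions."""

WIDTH, HEIGHT = 1920, 1080   # assumed remote desktop resolution
BYTES_PER_PIXEL = 4          # 32-bit RGBX pixmaps
FPS = 30                     # assumed update rate

full_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL   # one raw frame
naive_stream = full_frame * FPS                 # "just send the pixmap" every update

# Typical interactive damage: a 400x300 region actually changed (a menu, a toolbar).
damaged_region = 400 * 300 * BYTES_PER_PIXEL
damage_stream = damaged_region * FPS

print(f"one raw 1080p frame:    {full_frame / 1e6:5.1f} MB")
print(f"raw pixmaps at 30 fps:  {naive_stream * 8 / 1e9:5.2f} Gbit/s")
print(f"damage-only at 30 fps:  {damage_stream * 8 / 1e6:5.1f} Mbit/s (before any compression)")
```

RDP/VNC-style protocols then compress on top of the damage tracking, which is why they stay usable on links where raw pixmaps would choke.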
Most modern toolkits have moved past X11 because the X protocol was severely lacking, and there wasn't a good way, as a committee, to modify the protocol in a unified manner. I mean, look at how much Earth had to move just to get the XFixes and Damage extensions. Toolkits wanted deep access to the underlying hardware, so they would go out of their way to work around X, because it just could not keep up.


Shapiro’s previously unreported disclosure, dated Friday, came as part of a list of “corrections” to testimony by top SSA officials during last year’s legal battles over DOGE’s access to Social Security data. They revealed that DOGE team members shared data on unapproved “third-party” servers and may have accessed private information that had been ruled off-limits by a court at the time
Wow… Such shock… Much surprise.
Yeah, they were basically sending XLSs and CSVs of everyone's shit over non-secure channels like Cloudflare shares that had unknown people in them who didn't work for the Government.
Because they’re all fucking amateur idiots who think they know what the fuck they’re doing and they don’t. So shit like “Hey brah, pass me that sheet you were working on. NP, posted the link in the Molon labe Discord channel, hey you should check the meme BindensProstate6969 posted LOLOL.”
I mean, why did anyone think anything different was ever going to come of this? And now a bunch of unknown private citizens, a loose fringe collective who believe crazy shit, have everyone's details.


Yes, this has a name: asymmetric warfare. Anyone on the other end of the US military has to engage in it at present, and the likelihood that the United States will vote in a Trump 2.0 president down the road is what is driving a lot of these countries to start ramping up domestic weapons production.
All this will do is take resources that have long been given to the people and redirect them to building weapons, until the entire world is armed to the teeth and we are exactly where we were just before World War I. It will, however, make those who profit from war even richer along the way, and thus the people are robbed of a world we could have had, one of shared opportunity, just because some rich assholes in the United States wanted more money.
This is one of those speeches that’s going to be a thing people study and read deep into, because it perfectly captures this moment in history and the generalized sentiment of the massive shift in the world today.


At this rate it's Congress' limp-dick energy that lets them get away with this. It's absolutely amazing how spineless the institution has become.


But the prize was made of chocolate, so it’s only peace until it’s completely gone. Then it’s back to authoritarianism.


Oh man, it's pretty bad. It heads for the hip bone first and then climbs its way up your spinal column and into your ribs. The space between your skin and ribs basically turns into sandpaper, and it just eats away at the nerve bundles within your spinal cord. All the while it builds this extracellular material that basically acts as a wall, preventing cancer treatments and pain medication from getting in.
And through all of this, the cancer is using the damage it's causing to healthy cells to feed itself. It's really high up there among incredibly painful ways to die. It makes up less than 1% of all prostate cancers, but it has to be the most shit lotto to win.


have allegedly been released by a Department of Homeland Security whistleblower
ooooooo… Ooooo… Whoever this is… OOOOOOOOOOOOOOOOOOOO


Prostate ductal adenocarcinoma. A particularly bad cancer, absolutely terrible for anyone who gets it. Hate how he went, but don't mind that he's gone. Hell, being dead is likely a better situation than the one he was in with the cancer; it's just an absolutely terrible way to go.


His cancer was absolutely terrible. I wish it on no one. Dude’s bones were growing daggers into his flesh and tearing him up from the inside and there’s little anyone can give to ease that kind of pain.
I mean, he’s no saint and I’m sure he’s got a nice place in hell waiting for him. But damn, that kind of cancer is the stuff nightmares are made from.


Why are all these politicians so interested in Crypto?
One can only wonder.
Analysis shows that a wallet linked to the token’s deployer removed approximately $2.43 million in USDC liquidity shortly after the peak.
Yeah, so this is a classic rug pull. Hype it, get it into retail hands, have automated buy-in, pull the liquidity. Maybe put a bit back and pocket the fees from retail going crazy.
I read somewhere on here someone saying "people wouldn't have enough time to get scammed." A lot of this is bots buying, including small-time players who are just running something like Freqtrade Python scripts for buying and selling. No one is doing actual research; it's just programs watching microtransactions. Thirty minutes is an eternity.
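To show what "thirty minutes is an eternity" means here, a toy sketch of that kind of bot is below. It is not Freqtrade and not a real exchange client; `watch_new_pairs()`, `market_buy()`, and the threshold are hypothetical stand-ins. The point is only that the decision loop runs in seconds with zero research.

```python
#!/usr/bin/env python3
"""Toy sketch of an auto-buy bot for freshly deployed tokens. Everything here
(feed, buy function, thresholds) is a hypothetical stand-in, purely to show
that the 'decision' is a numeric check made in seconds, not research."""
import time

LIQUIDITY_THRESHOLD = 50_000   # USD of liquidity before the bot piles in (arbitrary)
POLL_SECONDS = 2               # the bot re-checks every couple of seconds


def watch_new_pairs():
    """Stand-in for a DEX listing feed; here it just simulates one new token."""
    yield ("HYPECOIN", 75_000)  # (token symbol, current liquidity in USD)


def market_buy(token: str, usd: int) -> None:
    """Stand-in for firing a buy; a real bot would sign and submit a swap."""
    print(f"{time.strftime('%H:%M:%S')} bought ${usd} of {token}")


def run_once() -> None:
    # No whitepaper, no contract audit -- if the number crosses the threshold, buy.
    for token, liquidity in watch_new_pairs():
        if liquidity >= LIQUIDITY_THRESHOLD:
            market_buy(token, usd=500)


if __name__ == "__main__":
    run_once()  # a real bot would loop forever, sleeping POLL_SECONDS between passes
```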
But yeah, this is exactly why you see so many bros so strong on this and why politicians and Governments are getting into it. The early days are like the early days of completely unregulated stock markets. It’s cash grabs everywhere. Down the road, there’s going to be this moment of ladder pulling to “keep everyone safe”.


The Soviet Union isn't actively moving part of its nuclear arsenal to the island.
The entire aspect of the crisis was weapons that could actually reach mainland US.
At the moment, no one who borders the Gulf of Mexico has weaponry that could reach, say, Miami, or Texas City, a more strategic point.
So there's a sense of security by distance in the United States. Now the thing is, this whole calculation changes the second any one of these nations procures a medium-range weapon. That was the missile crisis: the US found out the USSR was trying to send missiles to Cuba that could strike deep inside US territory.
The same would be true here. The second any of these nations obtain a missile that can hit the US the whole calculus changes.
Can you imagine? “Oh yeah, we invaded Venezuela but they have the ability to start blowing up condos from their own nation.” People would be a whole lot more upset than they are currently.
That’s comforting to hear. Thank you.