The difference is that LLMs generate something that is designed to blend in. It’s supposed to convince someone that it was made by a human.
So, a “vibe constructed” house would probably look like a real house to someone who didn’t know much about houses. But the pipe from the sink might just empty into the space between the walls. The electrical system would be a random tangle of wires that would short out the moment it was connected to the grid. The doors might look right at first, but when you tried to open one you’d find the hinges were installed in a way that made opening it impossible.
A while ago, there was a YouTube video of people laughing at AI-generated floorplans.
Because of course there was a company that tried to make an AI floorplan generator without a shred of thinking. They posted the “good” ones on their website, and even those had obvious problems: wildly misproportioned rooms, ten bathrooms in a small house, and doors just straight up missing everywhere.
Yeah, the fact that they still tried to sell AI slop with such glaring flaws should have been a clear sign that you should never trust marketers: they’ll push obvious garbage as if it’s amazing. Same thing with those early Coke AI ads that were just a series of unrelated scenes that look fine until you look any closer, yet someone greenlit them anyway.
They were phoning it in so hard they pushed what were technical demos at best as final products.
Though from my perspective, I already had inverse trust in products based on their apparent ad budget even before AI slop showed up, because enshittification and apathy about actual quality were rampant long before then.
You might be shocked to hear this, but all of those issues happen with no AI involved at all. Every time a bastard is born, you should slap an engineer.
I’ve seen a home where the dishwasher pipe just went into the wall and ended. Mixed copper and aluminum wire. The way the doors opened was random.
The vibe builder might not be much different than what we have.
The thing is, while LLMs do make mistakes, they are actually pretty good (eventually) when you have something you can drive conformance with.
For electrical or plumbing we can simulate those pretty well, and an LLM can iterate until it gets a reasonable output, since it can check its work against a simulator.
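That generate-and-check loop can be sketched in miniature. Everything here (`simulate`, `generate`, the pass criterion) is a hypothetical stand-in for illustration, not a real simulator or model call:

```python
import random

def simulate(circuit):
    # Hypothetical stand-in for a real electrical simulator: here a
    # "circuit" passes only if every load is fused.
    return all(load["fused"] for load in circuit)

def generate(feedback):
    # Hypothetical stand-in for an LLM call. A real loop would feed the
    # simulator's errors back into the prompt; this toy just retries.
    return [{"fused": random.random() < 0.5} for _ in range(3)]

def iterate_until_valid(max_attempts=200):
    # The loop described above: generate a candidate, check it against
    # the simulator, feed failures back, and repeat until it passes.
    feedback = None
    for attempt in range(1, max_attempts + 1):
        candidate = generate(feedback)
        if simulate(candidate):
            return candidate, attempt
        feedback = "simulation failed"  # real code would pass concrete errors
    raise RuntimeError("no valid design within the attempt budget")
```

The key property is that validity is decided by the checker, not the generator, so the model’s tendency to produce plausible-looking garbage is caught before anything ships.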
You could likely live in a house for a bit before you stumbled into something stupid. You could live your entire life and not notice if you never tore out the walls.
However, if you went to change a light switch, you might discover it used three different kinds of screws and the screw terminals were mislabeled.
That was my first thought as well, it would have the appearance but not the function.