Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. If you're wondering why this went up late, I was doing other shit)
(EDIT: Changed "29th February" to "1st March" - it's not a leap year)


I just had one of those "brain-doing-brain-stuff-good" moments (I think normal people call them delusions?) pondering why it is that AI code extruders are seeing widening adoption.
tl;dr - there's a bunch of people uncurious about the nature of the abstractions they use and it's a tragedy.
First a moment of background: My first software dev position was using Lisp and one of the most powerful concepts built into the language runtime was the macro facility, the ability to write code that writes code. The main downsides of Lisp are obsequious Lisp developers and hard-to-master C foreign function interfaces, so what you have is a toolchain of abandoned dependencies made by some real annoying characters, but I digress. The ability to write code that writes code is a powerful concept.
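To make the "code that writes code" idea concrete for non-Lispers, here is a rough sketch in Python rather than Lisp. `make_record_class` is a hypothetical helper (not anything from the original post): it builds the source text of a class from a field list and then `exec`s it, which is roughly what Python's own `dataclasses` module does under the hood. A real Lisp macro operates on syntax trees at compile time rather than on strings at runtime, so treat this as an analogy, not an equivalent.

```python
def make_record_class(name, fields):
    """Generate and execute the source of a simple record class.

    Hypothetical illustration of code-that-writes-code: we assemble
    Python source as text, then exec it to produce a real class.
    """
    lines = [f"class {name}:"]
    lines.append("    def __init__(self, " + ", ".join(fields) + "):")
    for field in fields:
        lines.append(f"        self.{field} = {field}")
    source = "\n".join(lines)

    namespace = {}
    exec(source, namespace)  # run the generated source in a fresh namespace
    return namespace[name]


Point = make_record_class("Point", ["x", "y"])
p = Point(1, 2)
print(p.x, p.y)
```

The generation step is completely deterministic: the same `name` and `fields` always produce the same source text, and you can print `source` to inspect exactly what was generated.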
I moved on to working with .Net which sometime around the 4.6 version release got enhancements to built-in language utilities. This led to better code-generators for numerous purposes (certain DI containers started to do dependency resolution at build time for example).
I did Scala for a time, which had a macro facility that was hot garbage and was rewritten between 2 and 3, so I never bothered to learn it. Around this time the orgs I worked for were placing an emphasis on OpenAPI / Swagger specs, for reasons I don't know, because while there was tooling that could be used to generate both the entire HTTP client and the set of interfaces used by the surface, we did neither (where I am right now we still do neither form of code gen).
Anyways, code generation, whether via external tooling or internal language facilities, is magical, but it is deterministic magic: identical input should yield identical output. It is also hard to use well. The ergonomics of the OpenAPI / Swagger codegen tooling are pretty bad, though not impossible, and the whole thing under the hood is powered by mustache templates. The .Net stuff is still there and works well, but I don't think many workplaces want to invest in really understanding that tooling and how it can be employed. Lisp will always be Lisp; good job, Lisp. There are other examples of code generation used for practical ends, I am sure.
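As a toy illustration of that deterministic magic, here is a hypothetical miniature of what Swagger-style client generators do, sketched in Python with stdlib `string.Template` standing in for mustache. The spec shape and names (`operationId`, `params`) are loosely modeled on OpenAPI but simplified for the example.

```python
from string import Template

# One template per generated client method; mustache plays this role
# in the real OpenAPI generators.
CLIENT_METHOD = Template('''\
def ${name}(session, base_url${params}):
    return session.${verb}(f"{base_url}${path}")
''')


def generate_client(spec):
    """Walk a toy OpenAPI-ish spec and emit client source.

    Iteration order is sorted so the output is stable:
    identical spec in, byte-identical source out.
    """
    chunks = []
    for path, operations in sorted(spec.items()):
        for verb, op in sorted(operations.items()):
            params = "".join(", " + p for p in op.get("params", []))
            chunks.append(CLIENT_METHOD.substitute(
                name=op["operationId"], verb=verb, path=path, params=params))
    return "\n".join(chunks)


spec = {"/pets/{pet_id}": {"get": {"operationId": "get_pet",
                                   "params": ["pet_id"]}}}

# Determinism in one line: regenerating from the same input
# gives exactly the same output.
assert generate_client(spec) == generate_client(spec)
print(generate_client(spec))
```

The point of the sketch is the property, not the tooling: everything between the spec and the emitted source is inspectable (you can read the template, step through the walk, diff two outputs), which is exactly what gets lost when the generator is a probabilistic model.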
The point is that code generation requires being able to think about and define certain forms of abstraction outside the target functionality of a single program, and while that thinking isn't hard, it's just high enough of a bar that your typical enterprise engineer won't engage with it (but will always be amazed by the results!).
AI Code Extruders change the cognitive burden that would be required for code generation into something that I guess appeals to engineers. You can specify something in the abstract and a Do-What-I-Mean machine may churn up something minimally useful, determinism be damned. Not only would an engineer not need to consider the abstraction layer between their input and the code but they would be unable to fully interrogate that abstraction because the code extruder does not need to show its work.
Just a thought. Probably a very silly thought.
I think there's definitely something to that. It seems like it rhymes with my own interpretation, at least. I did 7 years of support for backend network infrastructure (load balancing, SSL optimization, etc), and one thing I consistently found was that, given how the applications and tech services at most of these companies were structured, everything was treated as a complete black box by everyone who wasn't specifically working on that element. Like, I would find myself trying to trace a problem through the application flow, and every other request was essentially being handled by a completely different team, and the people I was talking to didn't even understand the questions I was asking. That level of siloed work is somewhat necessary given the sheer complexity of the systems and infrastructure that modern applications rely on, but it also seems to cultivate a certain level of incuriosity. What's happening inside those black boxes doesn't even get considered, because it doesn't matter; it's somebody else's problem right up until it suddenly isn't.

The current crop of confabulation machines takes this tendency to a kind of logical extreme, where nobody can adequately look into the black box to understand what it's doing, and that will similarly be perfectly fine up until it very much isn't, and there won't be anyone to call to figure out how to fix it.
@BurgersMcSlopshot @BlueMonday1984
I am cleaning up behind uncurious people that have made some vexing category errors.
I feel this, I was dealing with this at a prior employer.
I think you're actually right on the money here, nowhere near delusional, especially since you come from a Lisp background. I really appreciate Lisp (and Smalltalk) for the "live-coding" and universal inspectability/debuggability aspects in the tooling. I appreciate test-driven development as I've seen it presented in the Smalltalk context, as it essentially encourages you to "program in the debugger" and be aware of where the blank spots in your program specification are. (Although I'm aware that putting TDD into practice on an industrial scale is an entirely different proposition, especially for toolchains that aren't explicitly built around the concept.)
However, LLM coding assistants are, if not the exact opposite of this sort of tooling, something so far removed as to be in a different and more confusing realm. Since it's usually a cloud service, you have no access to begin debugging, and it's drawing from a black box of vector weights even if you do have access. If you manage to figure out how to poke at that, you're then faced with a non-trivial process of incremental training (further lossy compression) or possibly a rerun of the training process entirely. The lack of legibility and forthright adaptability is an inescapable consequence of the design decision that the computer is now a separate entity from the user, rather than a tool that the user is using.
I've posed the question in another, slightly less skeptical forum: what advantage do we gain from now having two intermediate representations of a program, the original, fully-specified programming language as well as the compiler IR/runtime bytecode? I have yet to receive a satisfactory answer.