Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many 'esoteric' right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged 'culture critics' who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)


This scream into the void has been on my mind for a while: Apparently I work for an AI company now.
Kinda.
When I had the interviews with my now-employer at the beginning of the year, they were an open-source cybersecurity startup. Everything sounded great, we got along, signed the contract. I took a long vacation before starting the position, and when I got back, I was… amused? bewildered? to find that a) we are no longer open source, and b) we have pivoted, hard, towards AI.
Luckily, I still get to work 100% of the time on the core (cybersecurity) product (which is actually a really good and useful thing; sorry, not going to be more specific). It's just that part of the dev team, as well as all of marketing and sales, now works on building and selling an AI product built on top of that.
At least it's not a wrapper around ChatGPT, and it does offer something kinda new and actually beneficial, but still, it's an LLM product.
Now, for the actual scream-into-the-void: Once a month, in a company-wide meeting, I have to watch people praise LLMs to the moon, attribute nonsense or downright bugs to something akin to proto-sentience, and give absurd estimates of profitability based on the idea that AI will totally be used everywhere and by everyone, very soon now, you'll see. What finally prompted (pun intended) me to post this is the CEO yesterday unironically referencing AI 2027's 'predictions'.
Can't wait for the bubble to burst. I'm really curious to see whether I'll keep my job through that. At the end of the day, the stuff I work on luckily has nothing to do with AI, and basically every other application of the product makes more sense; but now the entire company has shifted gears towards AI…
You would hope that, if they take their job seriously, the managers who predict AI mooning would also write predictions for the other scenarios, and not just the best case.
I mean… yeah, you would hope that, wouldn't you? And to be fair, they were selling the product beforehand as well. It's just apparently a lot easier to sell the AI angle right now.
Yeah, hope for your job's sake that they don't bet the company on the 2027 thing, because that would be quite the failure of management. If they just use it as a tool for sales, I get it (don't like it, but I get it); the CEO being all in on it is worrying, though, which is why I hope he has also made predictions for what happens if he is wrong and AI never advances significantly again (or even becomes a liability as a sales tool).
Realistically, the bubble bursting just means going back to pre-2025 target markets. But who knows.
For the company, hopefully, but it could also turn into 'any mention of AI gets interpreted as a bad sign', and you'd need to pivot before that affects the bottom line. (Clearly the pendulum is towards it being a good sign atm.)
crikey. I assume the CV is in good order and kept updated on the job sites just to see what comes in.
Yeah… (Un)fortunately, everything not AI-related about the company is pretty great, so I've decided to stick with it and hopefully still be there after the bubble bursts. Unless they try to reassign me to the AI project; then I'm gone.
take care and step carefully. there's a moment where the stress from working for a company with goals that counter your personal ethics is going to be hard to bear, and the worst thing you can do then is to change your value system to reduce the cognitive dissonance.
Thanks, I appreciate the concern. Luckily, the entire core dev team is very critical/cynical about AI; it's not just me, everyone I directly work with also wants to build the product for its intended purposes, not for AI use. I think that somewhat lessens the pressure to go along with the narrative.
Plus, I can't see that happening while participating in discussions on this lemmy instance :D
In any case, thank you for the sound advice,
Mawhrin-Skel Flere-Imsaho!