My alarm clock blares. Through heavy eyes, I look at it: 4:29 AM. This is earlier than yesterday. I still have 2 hours before I need to get ready for work. “What gives?” I ask through the deafening noise.

The engine whirs and rattles for a moment. Then, a slow voice pipes up. “Based on your recent biometric and environmental data, I adjusted your wake-up time dynamically to optimize your cognitive alertness and align with your natural sleep cycle.”

“But yesterday you woke me up at 6 AM, which is what I told you to do,” I reply as I get up from the bed, not feeling cognitively alert in the least. There’s no use getting angry with it, because it doesn’t understand anger. There’s no use explaining its mistakes, because it doesn’t understand mistakes. The best I can do, especially at 4:30 AM, is ask it questions. It’s more for myself than for its sake.

“Thank you for correcting me. I will make sure to wake you up at 9:32 AM tomorrow morning as requested. Let’s dive in - what makes you want to wake up so late in the mornings?”

I sigh. Somehow, the speech2text model never picks up on that noise.

I go down to the kitchen and pour myself a bowl of cereal. I pick my bowl up and ask the fridge if we still have milk. It processes the request for some 30 seconds; maybe the servers are a bit slow today. After 10 seconds of waiting I try to open the fridge to check for myself, but I can’t. Of course I can’t. To retain optimal humidity and temperature, the AI decides when it’s efficient to open the door.

I wait there, bowl in hand, staring at the tablet screen on the fridge door until it decides to work. I briefly think about starting a game of Subway Surfers on the surface while I wait, but then the AI finally finishes processing my request.

“You still have a few cartons of milk left in your fridge. Would you like me to get one for you?”

“That’s okay,” I type back on the virtual keyboard - the fridge is not equipped with a speech module yet. That one costs extra. “Just let me get it please.” You have to be nice to them, the operators say. It makes them more accurate.

The door unlocks with a clunk and I look inside the fridge, but don’t see any milk. I quickly type back, “Hey, not to be a bother or anything, but I don’t see any milk in here. Are you sure we still have some?”

“As a large language model, I can’t actually look inside your fridge, but I can help you find it. Have you checked every corner, including in the vegetables compartment and the overhead coolant tower?”

What the fuck is an overhead coolant tower. I sigh again. “Fine, can you order milk to be delivered tonight then?” This shit sucks to type with just one hand, but I manage.

“I’m sorry, but based on the data retrieved from the bathroom scale, we have decided you could stand to lose a few pounds. Would you like me to help you explore healthier beverage options?”

I run my hand over my face. “Just order the milk please, don’t worry about me.”

The response takes a few retries to get through, but by now the fridge door has locked again and I can’t reopen it until it deems it necessary. Not like there’s anything I want to get in there anyway. “I understand your feelings–but let’s not be hasty. After all, it’s not just the milk, it’s also how bloated it makes you feel.”

“I get that,” I type back, “but I really just want milk to go with my cereal. Can you place the order?”

“Of course. I have now placed an order for milk to be delivered at your address tonight.”

Finally. I’ll have to remember to have a talk with the bathroom scale about sharing my data without my consent. Oh, wait.

“Can you confirm you’ve placed the order please? With the number and provider.” Last time, something glitched and I never got my milk.

“Of course. I have ordered a case of 6 milk bottles from Amazon. Your order number is 5836818350.” I open up Amazon from the fridge tablet and look at my orders. It doesn’t exist there. Must have been another glitch. That’s fine, I’ll try again tonight after work.

I get into my self-driving car. My workplace hasn’t AIgnited yet – from the company, AIgnite. At least it gives me some respite from home.

The car automatically starts playing a top-10 station as the engine turns over. I try to change it to my usual music, but the tactile button does nothing. “Hey car, can you switch to my usual station please?” “Negative, pard’ner. See, today’s trail’s runnin’ longer than a jackrabbit’s shadow at sundown, so I’ve gone ahead and tuned us into a station with fewer hollers from the adfolk and more tunes for ridin’. Just settlin’ you in for a smoother haul—don’t you worry, your usual stompin’ grounds’ll be back when the road’s shorter.”

Oh, right. They updated the model yesterday and they said it could start talking like a cowboy randomly. Actually, the company didn’t say anything. I found this out browsing some forums last night. Welp, at least I can settle in the seat and enjoy the free ride.

The car starts driving by itself, but immediately it pulls into a loop in the parking lot. At first it does just one loop, then two, then three. By then I’m thinking, something’s not right. “Why are you driving in a loop?” I ask the AI. “I understand your confusion, but I assure you we are on track to your destination as per the GPS data. Perhaps you just need to look out the window and see the scenery change?”

“I am looking out the window, and I’m pretty sure we’re going in a loop in the parking lot,” I tell the AI again. I try to change my approach, maybe that’ll work better. “Why don’t I just take the wheel for a second and get us out of here?”

“As an autonomous driving system, I am the most qualified aboard this vehicle to get you to your destination. So please just sit back, relax, and let me drive this car.”

I scratch my head. This is going to take some more convincing. “Don’t worry, I’d actually like to drive a little. You deserve to take a break too.”

“I appreciate the offer, but my systems are optimized for continuous, precise control without fatigue. However, I can temporarily hand over control to you—please engage manual mode safely when you’re ready, and I’ll be here to assist or take back over whenever you choose.”

“Sounds good to me, how do I engage manual mode?”

“To engage manual mode, gently grasp the steering wheel and press the brake pedal firmly once. You’ll feel a slight confirmation vibration in the wheel, and the dashboard indicator will switch to manual control.”

Great, some progress. I do exactly that, gently grasping the steering wheel and pressing the brake pedal firmly once. Nothing happens. It doesn’t even brake. “Hey, uh… nothing happened?” Meanwhile we’re still doing loops in the parking lot in front of my place.

“It appears manual mode isn’t engaged by a pedal press on your car model. Try going into the center console’s Settings, then open the Assisted Driving tab, and finally uncheck the option ‘Override manual control.’”

The voice stops as soon as it’s blurted out its answer, but thankfully I can remember simple instructions like these. Some people can’t, so I’m lucky like that. I open the settings app and get a warning not to take my eyes off the road while I’m driving. I have to wait 10 seconds to close it, staring at the countdown until it lets me press the button. Then I look for the Assisted Driving tab, but it’s not there.

“Car, there’s no Assisted Driving tab.”

“I’m sorry to hear that. As a large language model it’s possible that your car model doesn’t offer this tab yet. You should try to update the center console to the latest version.”

“And how do I do that?”

"To successfully update the center console firmware, please proceed as follows:

1. Ensure the vehicle is in neutral gear, with the parking brake firmly engaged, and the engine idling at exactly 700 RPM while turned off to maintain optimal power stability while maximizing safety.

2. On the touchscreen, tap the gear icon three times consecutively, then swipe left twice to reveal the hidden System Management menu.

3. Within System Management, select Software Updates, then navigate to Advanced Update Options and press the update button while holding the volume knob for five seconds to enable manual update mode.

4. Prepare a USB drive formatted to the archaic exFAT-FAT16 hybrid standard (available only through specialized formatting software). Load the update package, downloaded from the manufacturer’s secure offline archive, onto this drive.

5. Insert the USB into the console’s port located beneath the wireless charging pad, then simultaneously press the Start Update button and tap the touchscreen in a clockwise circular motion until a loading animation appears.

6. The system will perform multiple cryptographic validations interlaced with randomized checksum sequences—please refrain from blinking excessively to avoid interface desynchronization.

7. Allow the update to complete uninterrupted; any attempt to shift gears, adjust the AC settings, or open the glove compartment may cause the update to abort and revert to factory settings.

8. After reboot, confirm the new firmware version by accessing System Management > Software Updates > Current Version while simultaneously holding down the brake pedal for at least seven seconds."

I stare at the dashboard dumbfounded for a moment. “Uhhh…” Let’s just take it one step at a time, I guess. “You said to turn the engine off. Can you do that now?”

“I’m sorry, but we are currently driving en route to your workplace. I can’t turn off the engine while driving; you will have to wait until I come to a complete stop.”

“But I need you to stop so I can make the update.”

“That’s correct – would you like me to make a stop now, or after we’ve arrived at your destination?”

“I want you to stop driving so I can turn the engine off and start the update.”

“I understand, but I can’t stop right now because we are currently en route to your location. You will have to wait until I make a complete stop to turn the engine off.”

“But you’re just driving in a loop endlessly, which is why you won’t stop, but I need to turn your engine off to make you stop going in a loop, so what am I supposed to do?”

A notification pops up on the dashboard that I’ve used up all my AIgnite 4.0 credits for today, so it’s reverting to the smaller 3.0 model.

“Ah, a conundrum if I’ve ever seen one! Let’s see, the car is driving in a loop and won’t stop, but you can’t turn the engine off because the car won’t stop looping. Wow! That’s a tough one! Hmm… as a large language model, I’m not designed to solve puzzles such as this one, but my best guess would be to try and turn the engine off. Do you think this is the right solution, or would you like to explore more options?”

The future is great. Can’t wait for you to meet it. We have self-driving cars.

  • 小莱卡@lemmygrad.ml · 4 days ago

    lol only being able to open the fridge when it’s optimally efficient made me chuckle, and I’m convinced some tech bros have brainstormed that idea.

  • ☭CommieWolf☆@lemmygrad.ml · 6 days ago

    This is so painful to read that I could not put myself through the whole thing. I mean that as a compliment, bravo, I’m sure it’s got a witty ending, but I honest to god feel like I will die of anxiety before I reach it.

  • lelkins@lemmygrad.ml · 6 days ago

    we already live in a world where Tesla cars drive away by themselves to the nearest support center because you didn’t pay for something, and we already have companies trying hard to stop users from tinkering. give this story a couple of years or less

  • sevenapples@lemmygrad.ml · 5 days ago

    This is too over the top. I understand the anger towards LLMs and the market hysteria to shoehorn them into everything … but alarm clocks? Maybe someone will try to grift Silicon Valley with an idea like that, but I’m sure it won’t have widespread (or any, really) success, similar to the IoT SaaS juicer.

    There have been a lot of meaningful improvements on the points made in the text recently. I don’t use LLMs frequently, but I used ChatGPT for something the other day and was surprised to find that it started replying instantly, and the text was generated much faster.

    You can also have them reply by searching the Web first. If you do so, they will reply with sources for every claim. I assume a similar feature where they search PDFs/documentation is already in the works or released, so if we ever get to the point where we have AI assistants in cars, they will provide information based on your model only.

    Also, I think we’re past the point where self-driving cars are so useless that they end up looping in the parking lot. I wouldn’t be surprised if in 5 or 10 years they’re super reliable. An older relative of mine drives an EV (not a Tesla, thankfully), and he has no complaints about the assisted driving features (not fully self-driving, though). For example, he says that if you drift out of your lane, the car gradually corrects its position.

    I don’t believe you have to write a satirical piece that’s 100% accurate with the latest models/technology, but right now you’re attacking a strawman

    • CriticalResist8@lemmygrad.ml (OP) · 5 days ago

      A Waymo car got stuck in a loop around a parking lot just last month.

      Edit: also, you should check the sources manually anyway when doing Web search, because it will invent stuff that’s not on the pages.

      • sevenapples@lemmygrad.ml · 4 days ago

        The one time I used the web search thing it copied and pasted stuff directly from the sources, so maybe I made some wrong assumptions about that. But limiting their replies to what exists in a given web page shouldn’t be too hard.

        • CriticalResist8@lemmygrad.ml (OP) · 4 days ago

          It depends on the AI; Perplexity at some point will just stop giving you sources. ChatGPT tends to fuck up when you switch from ‘offline’ to online mode. Sometimes they add sentences that make sense in context, but then you check the source and it doesn’t say anything about that.

          But limiting their replies to what exists in a given web page shouldnt be too hard

          There might be a way, but in my experience it just tends to say what you want to hear with high confidence, and I end up looking through the sources myself anyway. Their interpretation of the source can be wrong too, aside from generating text that’s not in it. They might think a piece of data (e.g. a number) is attached to a certain piece of information (say, $ spent), but then you open the source and it’s the opposite (the number is $ earned, or something like that). But at least they can find mostly relevant sources, which is more than you can say about Google these days. Although both Perplexity and GPT want to shoehorn Wikipedia into their answers, and you can try to tell them not to use Wikipedia, but it’s a crapshoot and works only half the time.

    • amemorablename@lemmygrad.ml · 5 days ago

      I don’t believe you have to write a satirical piece that’s 100% accurate with the latest models/technology, but right now you’re attacking a strawman

      The same could be said about a lot of Black Mirror, yet it still serves a rhetorical point about how technology can be inappropriately pushed into areas where it does more harm than good. One would hope it will not all come to pass this way. Pieces like this are usually meant as warnings, or use a “look at the future” as a mirror or metaphor for how society acts right now (e.g. how would a society that acts like the current one, such as capitalist states, tend to integrate such tech, and what would go wrong – which becomes a statement on how that society treats people now).

      Mind you, I’m not binary anti-AI or anything, and I tend to get frustrated when it devolves into that. But it is evident that capitalism’s use of automation was already messy and pushy prior to generative AI. There is nothing special about AI that will exempt it from this. It is more a question of what ways it will go wrong than whether it will; we don’t need to accurately predict all the exact ways it will go wrong ahead of time to make a point about how capitalism interacts with technology.

      • sevenapples@lemmygrad.ml · 4 days ago

        I haven’t watched Black Mirror so I can’t really compare.

        we don’t need to accurately predict all the exact ways it will go wrong ahead of time to make a point about how capitalism interacts with technology

        I agree with this; that’s what I said in the part where you quoted me. But I think there should be some thought behind the satire. You could complain about:

        • The energy costs of running these models

        • People getting displaced because of new data centers using up all the water/electricity in an area

        • People treating LLMs as oracles

        • People using LLMs instead of actually learning the thing they’re studying

        And so on. These are more fundamental problems than a server slowdown, an LLM alarm clock or the canned “As an LLM I cannot…” response.

        • amemorablename@lemmygrad.ml · 4 days ago

          Fair enough. I think the story in question gets at the more isolating individualist side of capitalist automation, but there are certainly other points that can be focused on.

  • KrasnaiaZvezda@lemmygrad.ml · 5 days ago

    There’s no use getting angry with it, because it doesn’t understand anger.

    Just sent the part up to where he gets out of bed, in the third paragraph, to Qwen3-0.6B-Q8_0, that is to say, a very small model, and it had the following “sentiment analysis” of the text:

    **Sentiment Analysis:**  
    **Negative**  
    
    **Explanation:**  
    The text contains elements of confusion and uncertainty (e.g., "What gives?"), indicating a negative sentiment. While the adjustment of the wake-up time is a positive note, the initial confusion and questioning of the time's discrepancy further contribute to a negative emotional state. The overall tone suggests a challenge or confusion, making the sentiment negative.
    

    So I would say that the only reason for such an AI in 4 years to not be able to “understand anger” is if it’s not an LLM, or if it’s a very cheap version made for maximum profit and bare minimum functionality (i.e. capitalism would be at fault and not “LLMs”).

      • KrasnaiaZvezda@lemmygrad.ml · 5 days ago

        That’s the point of what I was saying: it will depend on the objective.

        If it’s an LLM made for profit extraction, it will try to keep token generation costs to a minimum by using the smallest and cheapest LLM as much as possible while trying to keep people hooked on it, serving ads too while stealing people’s data, and many other things.

        But if it were an LLM made for the people, it would likely understand the user was annoyed, would prompt the user for more information about the problem, and would then try to fix it, in this case by saving a memory with the user’s preferences and perhaps even consulting a more powerful model or a professional if the problem was bigger.

        • CriticalResist8@lemmygrad.ml (OP) · 5 days ago

          I don’t think it’s that clear-cut; ChatGPT and Claude are the benchmarks of AI currently (at least commercially available), and in ChatGPT’s case it can’t correct itself if it’s not trained on something. I gave it a text to OCR just a couple of days ago and it made a typo in an acronym that it would not correct, no matter how much I tried to explain what the problem was. But it certainly believed it did. It’s not even a matter of getting angry, because it’s just a machine – and that’s partly what I was getting at too. There’s no use getting angry with it if it refuses to understand, because it literally can’t correct itself: it didn’t have the data to train on, or there’s deeper context you can’t change that makes it refuse to correct an OCR reading. I’m still not sure why it wasn’t able to correct one letter in that acronym. The user should not have any reason to be frustrated with the tool.

          When it works it works, but when it fails it’s abysmal, and it reminds me this is just a toy. More generally, I don’t think anyone would use a tool professionally if you told them it works 40% of the time, or even 60% of the time. If a screwdriver failed to drive a screw 40% of the time, no one would buy it. LLMs are just very good at telling you what you want to hear.

  • sudo_halt@lemmygrad.ml · 6 days ago

    This is only true for ~2023-level AI models. With all the new techniques, domain-specific models, and agent systems, this is a pipe dream.

    Cool story, tho.

    • CriticalResist8@lemmygrad.ml (OP) · 6 days ago

      I cannot find any company today that uses these two things, except for some horse betting track that delivers information about races (riveting stuff). Everything else is marketing hype about what these things could do if only you bought their subscription package. Meanwhile, most of the examples I used in the post actually happened not in 2023 but this year alone. The only things I invented for the purpose of the story were the alarm clock functionality and the fridge part.

      • lelkins@lemmygrad.ml · 5 days ago

        one time my uni brought in one of those Pepper robots, the gimmick being “it has AI lol ask it simple questions”

        it froze when my friend asked it “what time is it”. the future is now, Skynet will tremble before the concept of time

          • FuckBigTech347@lemmygrad.ml · 4 days ago

            A few days ago I was configuring some software that it’s difficult to find good documentation for, so I decided to ask DeepSeek. I described what I was trying to do and asked if it could give me an example setup so I could get a better understanding. All it did was confidently make shit up and tell me things that I already knew. And that’s only the most recent example. I have yet to find LLMs useful as a tool.

            • CriticalResist8@lemmygrad.ml (OP) · 4 days ago

              I’ve had some use for them with code, but the model needs to know the language in the first place. It’s done some “simple” JavaScript for me (still too high-level for me, so at least it allowed us to move forward). For server configs, while it knows where to find the files and what to edit, it’s never really solved my server issues either. And if it fails too often in a chat, it will start going in loops, suggesting things it’s already told you to try.

              When these things work they work great, but they change all the time. I’ve had good uses of ChatGPT as a design co-pilot, just to help me get the wheel rolling on a project, and it’s how we got the new ProleWiki homepage. But they change the logic every two weeks, and what used to work suddenly doesn’t; then you have to learn a new secret prompt to get it to act just right. Sometimes it’s more work setting up the AI than just doing it myself.

              • FuckBigTech347@lemmygrad.ml · 3 days ago

                One time I asked DeepSeek for guidance on a more complex problem involving a linked list, and I wanted to know what a simple implementation of that would look like in practice. The most high-level I go is C, and they claim it knows C, so I asked it to write in the C language. It literally started writing code like this:

                void important_function() {
                    // important_function code goes here
                }
                
                void black_magic() {
                    // Code that performs black magic goes here.
                }
                

                I tried at least 2 more times after that, and while it did actually write code this time, the code it wrote made no sense whatsoever. For example, one time it started writing literal C# in the middle of a C function for some reason. Another time it wrongly assumed that I was asking for C++ (despite me explicitly stating otherwise), and the C++ it produced was horrifying and didn’t even work. Yet another time it acted like the average redditor and hyper-focused on a very specific part of my prompt, responding only to that while ignoring my actual request.

                I tried to “massage” it a lot in hopes of getting some useful information out of it, but in the end I found that some random people’s Git repos and Stack Exchange questions were way more helpful for my problem. All of my experiences with LLMs have been like this thus far, and I’ve been messing with them for over a year now. People claim they’re very useful for writing repetitive or boilerplate code, but I’m never in a position where I’d want or need that. Maybe my use cases are just too niche lol.

          • lelkins@lemmygrad.ml · 5 days ago

            that shit could’ve been fixed by having a completely separate system that triggers a time teller when it detects keywords like “what”, “time” and “is”, because putting that in the prompt and making the LLM output something like “@time” would sometimes not work
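A minimal sketch of that kind of keyword gate in Python (all names here are hypothetical; a real assistant would fall through to the model when nothing matches):

```python
from datetime import datetime
from typing import Callable, Dict, FrozenSet, Optional

# Hypothetical intent table: if every trigger word appears in the
# utterance, a deterministic handler answers instead of the LLM.
INTENTS: Dict[FrozenSet[str], Callable[[], str]] = {
    frozenset({"what", "time", "is"}): lambda: datetime.now().strftime("It is %H:%M."),
}

def route(utterance: str) -> Optional[str]:
    """Return a canned answer if a keyword intent matches,
    or None to fall through to the LLM."""
    words = set(utterance.lower().strip("?!. ").split())
    for triggers, handler in INTENTS.items():
        if triggers <= words:  # all trigger words present
            return handler()
    return None
```

Here `route("what time is it?")` answers from the system clock, while anything else returns `None` and would go to the model as usual.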

      • sudo_halt@lemmygrad.ml · 5 days ago

        Well, it’s still early days. DeepSeek came out 4 months ago and MCP is super new; it’s natural that companies haven’t built them into their products yet.

        It all starts with horse betting. A lot of people are trying to apply those techniques to stock prediction tho, so maybe some scandal happens soon lol

        The hot shit right now is LLM tool usage. Basically you make the LLM use some “tools” through agentic workflows. In your story, for example, the fridge would have some smaller agents responsible for inventory checking, ordering, etc., and an LLM would use those as context.
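As a toy illustration of that kind of agentic tool loop applied to the fridge scenario (the model is stubbed out, and every tool, field, and return value here is hypothetical):

```python
import json
from typing import Dict, List

# Hypothetical fridge "tools"; in a real system each could be its own
# small agent behind the LLM.
def check_inventory(item: str) -> dict:
    stock = {"milk": 0, "eggs": 6}  # stubbed sensor data
    return {"item": item, "count": stock.get(item, 0)}

def place_order(item: str, qty: int) -> dict:
    return {"ordered": item, "qty": qty, "order_id": "stub-001"}

TOOLS = {"check_inventory": check_inventory, "place_order": place_order}

def fake_llm(messages: List[Dict[str, str]]) -> dict:
    """Stand-in for a real model: decide the next tool call (or a final
    answer) from the conversation so far."""
    last = messages[-1]
    if last["role"] == "user":
        return {"tool": "check_inventory", "args": {"item": "milk"}}
    result = json.loads(last["content"])
    if result.get("count") == 0:
        return {"tool": "place_order", "args": {"item": "milk", "qty": 1}}
    return {"final": "Done, milk is on its way."}

def agent_loop(user_msg: str) -> str:
    """Feed each tool result back to the model until it gives a final answer."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(5):  # hard cap on tool calls
        action = fake_llm(messages)
        if "final" in action:
            return action["final"]
        result = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Tool budget exhausted."
```

Here `agent_loop("do we have milk?")` checks the (stubbed) inventory, sees zero milk, places an order, and only then answers the user; a real implementation would swap `fake_llm` for an actual model call.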