(They/Them) I like TTRPGs, history, (audio and written) horror and the history of occultism.

  • 0 Posts
  • 11 Comments
Joined 3 months ago
Cake day: January 24th, 2025

  • What’s yours? I’m stating that LLMs are not capable of understanding the actual content of any words they arrange into patterns. This is why they create false information, especially in cases like my examples with citations: the output is purely the result of the model producing sets of words that sound like academic citations. It doesn’t know what a citation actually is.

    Can you prove otherwise? In my sense of “understanding,” it means actually knowing the content and context of something, and being able to subject it to analysis and explain it accurately and completely. An LLM cannot do this. It’s not designed to; there are neural-network AIs built on similar foundational principles toward divergent goals that can produce remarkable results in terms of data analysis, but not ChatGPT. It doesn’t understand anything, which is why you can repeatedly ask it about a book only to look it up and discover it doesn’t exist.



  • As I understand it, most LLMs are almost literally the Chinese room thought experiment. They have a massive collection of data, strong algorithms for matching letters to letters in a productive order, and sufficiently advanced processing power to make use of that. An LLM is very good at presenting conversation; completing sentences, paragraphs or thoughts; or answering questions of very simple fact. They’re not good at analysis, because that’s not what they were optimized for.

    This can be seen in what people discovered: if you ask them to do things like count how many times a letter shows up in a word, do simple math that’s presented in a weird way, or write a document with citations, they will hallucinate information, because they are just doing what they were made to do: complete sentences, expanding words along a probability curve that produces legible, intelligible text.
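    To make the “probability curve” idea concrete, here’s a toy sketch in Python. It’s nothing like a real LLM in scale or mechanism, and the word probabilities are invented, but it shows what “just completing text” looks like: the program extends a prompt with whatever is statistically plausible, and nothing in it ever checks whether the result is true.

    ```python
    import random

    # Invented next-word probabilities, standing in for patterns learned from training text.
    next_word_probs = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
        ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
        ("sat", "on"): {"the": 0.95, "a": 0.05},
        ("on", "the"): {"mat": 0.5, "roof": 0.3, "moon": 0.2},
    }

    def complete(prompt, steps=4):
        words = prompt.split()
        for _ in range(steps):
            context = tuple(words[-2:])          # the last two words act as the "context"
            dist = next_word_probs.get(context)
            if dist is None:                     # nothing plausible to add, so stop
                break
            choices, weights = zip(*dist.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(complete("the cat"))  # e.g. "the cat sat on the moon" -- fluent, never fact-checked
    ```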

    I opened up ChatGPT and asked it to provide me with a short description of how medieval European banking worked, with citations, and it provided me with what I asked for. However, the citations it made were fake.

    The minute I asked it about them, I assume a bit of sleight of hand happened: it’s been set up so that if someone asks a question like that, it’s forwarded to a search engine that verifies whether the book exists, probably using WorldCat or something. Then I assume another search is made to build the prompt for the LLM to present the fact that the author does exist, and possibly to accurately name some of their books.

    I say sleight of hand because this presents the idea that the model is capable of understanding that it made a mistake, but I don’t think it does. If it knew the book wasn’t real, why would it have mentioned it in the first place?

    I tested each of the citations it made. In one case, I asked it to tell me more about one of them and it ended up supplying an ISBN without my asking, which I dutifully checked. The ISBN was for a book that exists, but that book’s title and author didn’t match what the LLM had given me, because those were made up. The real book was about the correct subject, but if the LLM can’t even tell me the name of the book correctly, why am I expected to believe what it says about the book’s contents?
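    For anyone who wants to run the same kind of check, a minimal sketch of an ISBN lookup against the public Open Library API is below. (The ISBN in the example is a placeholder; substitute whatever the model hands you.)

    ```python
    import requests  # assumes the requests package is installed

    def lookup_isbn(isbn):
        # Open Library serves a JSON record per ISBN; a 404 means no book
        # with that ISBN is on record.
        resp = requests.get(f"https://openlibrary.org/isbn/{isbn}.json", timeout=10)
        if resp.status_code == 404:
            print(f"{isbn}: no book found with that ISBN")
            return
        resp.raise_for_status()
        data = resp.json()
        print(f"{isbn}: {data.get('title', '<no title listed>')}")

    lookup_isbn("9780000000000")  # placeholder -- replace with the ISBN the LLM supplied
    ```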


  • It’s complicated. The current state of the internet is dominated by corporate interests towards maximal profit, and that’s driving the way websites and services are structured towards very toxic and addictive patterns. This is bigger than just “social media.”

    However, as a queer person, I will say that if I didn’t have the ability to access the Internet and talk to other queer people without my parents knowing, I would be dead. There are lots of abused kids who lack any other outlet to seek help, talk to people and make sense of their problems, or otherwise find relief from the crushing weight of familial abuse.

    Navigating this issue will require grace, awareness and a willingness to actually address core problems and not just symptoms. It doesn’t help that there is an increasing uptick of purity culture and “for the children” legislation that will curtail people’s privacy and ability to use the internet, and be used to push queer people and their art or narratives off of the stage.

    Requiring age verification reduces anonymity and makes it certain that some people will be unable to use the internet safely. Yes, it’s important in some cases, but there’s also a cost to it.

    There’s also the fact that western society has systemically ruined all third spaces and other places for children to exist in that aren’t their home or school. It used to be possible for kids and teens to spend time at malls, or just wander around a neighborhood. There were lots of places where they were implicitly allowed to be, but those are overwhelmingly being closed, commercialized or subject to the rising tide of moral panic and paranoia that drives people to call the cops on any group of unknown children they see on their street.

    Police violence and the severity of police responses have also escalated, so things that used to be minor, almost expected misdemeanors for children wandering around now carry a literal risk of death.

    So children are increasingly isolated, locked down in a context where they cannot explore the world or their own sense of self outside the hovering presence of authority- so they turn to the internet. Cutting that off will have repercussions. Social media wouldn’t be so addictive for kids if they had other venues to engage with other people their age that weren’t subject to the constant scrutiny of adults.

    Without those spaces, they have to turn to the only remaining outlet. This article is woefully inadequate to address the fundamental, core problems that produce the symptoms we are seeing; and its implementation will not rectify the actual problem. It will only add additional stress to the system and produce a greater need to seek out even less safe locations for the people it ostensibly wishes to protect.





  • My suggestion is to either change the context you play games in, or pick games that are very cognitively different from what you normally do at work.

    You can change your context with a new console, but I think it may be cheaper to do something like buying a controller and playing games while standing up, on your couch or armchair, or while sitting on a yoga ball. The point is to trick your brain, because it has come to associate sitting at a desk in front of a computer with boring tedium. Change the presentation and your subconscious will interpret it differently.

    You can also achieve this by identifying the things that you have to do in your job that mirror videogame genres you enjoy and picking a game that shares few of those qualities.

    I worked at the post office for years, doing mail processing, and my enjoyment of management and resource distribution style games went down sharply during that time because of the cognitive overlap- I played more roguelikes and RPGs as a consequence.


    Thank you, I am trying to be less abrasive online, especially about LLM/gen-AI stuff. I have come to terms with the fact that my desire for accuracy and truthfulness skews so far past the median that it’s almost pathological, which is probably why I ended up studying history in college. To me, the idea of using an LLM to get information seems like a bad use of my time: I would methodically check everything it says, and the total time spent would vastly exceed any amount saved, but that’s because I’m weird.

    Like, it’s probably fine for anything you’d rely on skimming a Wikipedia article for. I wouldn’t use them for recipes or cooking, because that could give you food poisoning if something goes wrong, but if you’re just asking, “Hey, what’s Ice-IV?” then the answer it gives is probably equivalent, in 98% of cases, to checking a few websites. People should invest their energy where they need to, and it’s less effort for me not to use the technology, but I know there are people who can benefit from it and have a good use case for it.

    My main point of caution for people reading this is that you shouldn’t rely on an LLM for important information, whatever that means to you: if you want to be absolutely sure about something, you shouldn’t risk an AI hallucination, even if it’s unlikely.


  • I’m not a frequent user of LLMs, but this was pretty intuitive to me after using them for a few hours. However, I recognize that I’m a weirdo and so will pick up on the idea that the prompt leads the style.

    It’s not like the LLM actually understands that you are asking questions, it’s just that it’s generating a procedural response to the last statement given.

    Saying please and thank you isn’t the important part.

    Just preface your use with something like,

    “You are a helpful and enthusiastic assistant with excellent communication skills. You are polite, informative and concise. A summary of [the topic] follows in the style of your voice, explained clearly and without technical jargon.”

    And you’ll probably get promising results, depending on the exact model. You may have to massage it a bit before you get consistently good output, but experimentation will show you the most reliable way to get the results you want.
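    If you’re going through an API instead of the chat window, that preface usually goes in as a “system” message. Here’s a rough sketch with the OpenAI Python client; the model name and exact wording are just placeholders, and other chat APIs take a similar system/user message structure.

    ```python
    from openai import OpenAI  # assumes the openai package and an OPENAI_API_KEY env var

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model is used the same way
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a helpful and enthusiastic assistant with excellent "
                    "communication skills. You are polite, informative and concise. "
                    "Explain things clearly and without technical jargon."
                ),
            },
            {"role": "user", "content": "Give me a short summary of how tides work."},
        ],
    )

    print(response.choices[0].message.content)
    ```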

    Now, I only trust LLMs as a tool for amusing yourself by asking them to talk in the style of your favorite fictional characters about bizarre hypotheticals, but at this point I accept there’s nothing I can do to discourage people from putting their trust in them.