
  • That’s the point of what I was saying: it will depend on the objective.

    If it’s an LLM made for profit extraction, it will keep token-generation costs to a minimum by falling back on the smallest, cheapest model as much as possible while trying to keep people hooked, and it will serve ads, harvest users’ data, and so on.

    But if it were an LLM made for the people, it would likely recognize that the user was annoyed, prompt them for more information about the problem, and then try to fix it: in this case by saving a memory with the user’s preferences, and perhaps even consulting a more powerful model or a professional for a better solution if the problem were bigger (roughly the flow sketched below).
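
    To make that concrete, here is a purely illustrative sketch of that flow in Python, with every model call stubbed out; none of these names come from a real system. A cheap sentiment pass flags annoyance, the assistant asks a follow-up, saves the answer as a preference, and escalates to a bigger model only when the problem looks too large:

    ```python
    # Hypothetical "user-first" routing; every function is a stub, since
    # this illustrates a design goal rather than a concrete system.
    memory: dict[str, str] = {}  # persisted user preferences

    def classify_sentiment(text: str) -> str:
        """Stand-in for a cheap small-model sentiment pass."""
        return "negative" if "!" in text else "positive"

    def small_reply(text: str, prefs: dict) -> str:
        return f"small-model answer, using preferences {prefs}"

    def big_reply(text: str, prefs: dict) -> str:
        return f"big-model answer, using preferences {prefs}"

    def handle(message: str, follow_up_answer: str) -> str:
        if classify_sentiment(message) == "negative":
            # Ask what went wrong instead of guessing, then remember it.
            memory["preference"] = follow_up_answer
            # Escalate only if the problem seems too big for the small model.
            if len(follow_up_answer.split()) > 20:
                return big_reply(message, memory)
        return small_reply(message, memory)

    print(handle("The alarm went off at the wrong time again!",
                 "always keep my wake-up alarm at 7:00"))
    ```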


  • > There’s no use getting angry with it, because it doesn’t understand anger.

    I just sent the part up to where he gets out of bed (the third paragraph) to Qwen3-0.6B at Q8_0 quantization, that is to say, a very small model, and it gave the following “sentiment analysis” of the text (a rough way to reproduce this is sketched after the output):

    **Sentiment Analysis:**  
    **Negative**  
    
    **Explanation:**  
    The text contains elements of confusion and uncertainty (e.g., "What gives?"), indicating a negative sentiment. While the adjustment of the wake-up time is a positive note, the initial confusion and questioning of the time's discrepancy further contribute to a negative emotional state. The overall tone suggests a challenge or confusion, making the sentiment negative.
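
    For anyone who wants to try the same check, here is a minimal sketch using the Hugging Face transformers library. The Hub ID Qwen/Qwen3-0.6B and the prompt wording are my assumptions; the run above used a Q8_0 quantization, presumably through llama.cpp or similar, so outputs will differ:

    ```python
    # Minimal sketch: ask a small local Qwen3 model for a sentiment analysis.
    # Model ID and prompt are assumptions; adjust to your own setup.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3-0.6B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    passage = "..."  # paste the paragraph to analyze here

    messages = [{
        "role": "user",
        "content": "Do a sentiment analysis of the following text. "
                   "Label it Positive, Negative, or Neutral and explain why.\n\n"
                   + passage,
    }]
    # enable_thinking=False asks Qwen3's chat template to skip the <think> block.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, enable_thinking=False,
        return_tensors="pt",
    )
    output = model.generate(input_ids, max_new_tokens=256)
    # Decode only the newly generated reply, not the echoed prompt.
    print(tokenizer.decode(output[0][input_ids.shape[-1]:],
                           skip_special_tokens=True))
    ```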
    

    So I would say that the only reason for such an AI, four years from now, to be unable to “understand anger” is that it either isn’t an LLM at all, or it’s a very cheap version built for maximum profit and bare-minimum functionality (i.e. capitalism would be at fault, not LLMs).