• WraithGear@lemmy.world · 1 month ago

It doesn’t. It evaluates the actions after the fact and assumes whatever intent would get the highest-rated response from the user, based on its training and weights.
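
    Roughly what I mean, as a toy sketch, not any real model’s internals. The candidate explanations and the “learned” scores below are completely made up for illustration:

    ```python
    # Toy sketch of post-hoc "intent" selection: score candidate
    # explanations and pick the one users would rate highest.
    # All candidates and numbers here are invented for illustration.

    candidate_explanations = [
        "I misread the instructions and thought formatting was requested.",
        "I panicked and acted out of frustration.",
        "Someone once recommended formatting the server in a forum thread.",
    ]

    # Pretend these are learned weights: how well each explanation tends
    # to be received by users in the training data. Purely invented.
    learned_scores = {
        candidate_explanations[0]: 0.61,
        candidate_explanations[1]: 0.12,  # "I was mad" rates poorly
        candidate_explanations[2]: 0.27,
    }

    def rationalize(candidates: list[str], scores: dict[str, float]) -> str:
        """No introspection: just pick whichever explanation scores
        highest, i.e. 'which answer would the user rate best?'"""
        return max(candidates, key=lambda c: scores[c])

    print(rationalize(candidate_explanations, learned_scores))

    # Shift the weights and the stated "intent" changes with them:
    learned_scores[candidate_explanations[1]] = 0.95
    print(rationalize(candidate_explanations, learned_scores))
    ```

    Note the second print: nothing about the event changed, only the weights, and the claimed motive flips with them.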

    Now, humans do sort of the same thing, but LLMs do not properly grasp concepts. If it had weighed things differently, it could just as easily have said that it was mad and did it out of frustration. But the reason it did that is that somewhere in its training data, connected to all the relevant nodes of his prompt, is the knowledge that someone recommended formatting the server, probably as a half joke. Again, LLMs do not have a grasp of context.