• 0 Posts
  • 5 Comments
Joined 10 days ago
Cake day: July 24th, 2025

  • The original comment is perpetuating the lie, intentionally or not. It relies on fundamentally flawed soundbites that are precisely crafted as propaganda, not to be informative or truthful at all.

    Right off the bat they say “in principle”, which presumes the baseline lie that “full self driving” has been achieved. Then they strengthen their argument by reinforcing the idea that it’s functionally equivalent to humans (i.e. generalized intelligence). Then they cap it off with “no known flaw”. Pure lies.

    Of course they’ve hedged by implying it’s an opinion, while strongly suggesting it’s the most correct one anyway.

    I’m unsure if and how much has changed

    This demonstrates exactly how effective the propaganda is. They set up scenarios where nobody honest will refute their bullshit with certainty, even though we know no existing system is on par with human drivers. Sure, they can massage data to say that under certain conditions an automated driving system performed similarly by some metric or whatever. But that’s fundamentally not what they’re telling the lay audience. They’re lying in order to lead the average person to believe they can trust their car to drive them as if they were a passenger and another human were behind the wheel. This is not true. Period. There is no existing system that does this. There will not be one in the foreseeable future.

    The fact of the matter is that technological discussion is more about this kind of propaganda than the technology itself. If that weren’t the case, more people would be hearing about the actual technology and its real limitations, not all the spin-doctoring. That leads to uncertainty and confusion, which leads to preventable deaths.




  • OctopusNemeses@lemmy.world to Atheist Memes@lemmy.world · Toxic empathy...?

    Why do people inherently associate the two? It’s like some sort of rationalization fallacy, as if insight leads to benevolence. There are people who use empathy with malice.

    I see people saying bigots just don’t understand others enough, and that’s why they hate, so if they just understood, the hate would go away. Some of them understand very well. They use that knowledge to be more hateful.

    There’s a dark side to social media and the internet in general. People have been using it to get insight into different facets of humanity. Some have been using it to study how to be more effective bigots. I noticed this after lurking subreddits for so many years.

    This is sort of tangential, but I’ve found these types of sociopathic people on mental health subreddits. They prey on the vulnerable, the individuals who will dump on anyone who will listen. Quite frankly, there are malicious people stalking around subs like that. They prey on those individuals and nudge them further down into darkness.

    Those predators have evoked empathy in an individual, which gets mistaken for kindness, so the victim thinks that person is on their side. How can you tell someone they might want to reconsider the things this person or people have been whispering in their ear?

    I suspect this happens in other places too, like LGBT+ and racial minority subreddits, though it’s more difficult to understand from the outside. The subreddits for mental health and personal issues are more universally relatable.