• 4 Posts
  • 696 Comments
Joined 2 years ago
Cake day: August 5th, 2023


  • I agree with you in general. I think the problem is that people who do understand Gen AI (what it is and isn’t capable of, and why) get rationally angry when it’s humanized with words like these to describe what it’s doing.

    The reason they get angry is that it makes people who do believe in the “intelligence/sapience” of AI more secure in their belief set and harder to talk to in a meaningful way. It lets them keep up the fantasy, which of course helps the corps pushing it.



  • atrielienz@lemmy.world to ADHD@lemmy.world · eye contact · 3 days ago

    I didn’t recognize that I had issues with it, but I had issues with it for years. Subconsciously (probably because my mother was such a stickler for eye contact when we were in trouble), I look at facial features and focus on one that lets me fake eye contact, but I rarely actually meet people’s eyes.

    I’d also like to note that people with both ASD and ADHD often have symptoms that can mask each other, allowing us to pass as not having one or the other neurodivergence. So just because someone doesn’t see ASD in your behavior doesn’t mean you don’t have it.


  • I ran into this problem with a dual boot of Windows 10 and Bazzite, where I wanted to reclaim more drive space for Linux but couldn’t use GParted to allocate more space because Windows kept trying to claim that space.

    Even though this wasn’t the exact problem I was looking to solve, it did work for me.

    When you go to install Bazzite, can you manually make a separate EFI partition for it? Because that’s the part that I think might help you.

    If not, can you back up the windows install and reformat?
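
    For what it’s worth, here’s roughly what carving out the ESP manually from a live USB looks like, sketched as a small Python wrapper around parted. The device name, offsets, and partition number below are placeholders I made up; read the real values off `lsblk` and `parted ... print free`, and run it as root:

```python
import subprocess

DISK = "/dev/nvme0n1"  # placeholder: double-check your actual disk with `lsblk`
PART = "5"             # the number parted assigns the new partition; verify with `print`

def run(cmd):
    """Echo and run one partitioning step, stopping on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Carve a ~512 MiB EFI System Partition out of unallocated space.
# Offsets are examples only; read yours off `parted /dev/nvme0n1 unit MiB print free`.
run(["parted", "--script", DISK, "mkpart", "EFI", "fat32", "200GiB", "200.5GiB"])
run(["parted", "--script", DISK, "set", PART, "esp", "on"])

# UEFI firmware expects FAT32 on the ESP.
run(["mkfs.vfat", "-F", "32", f"{DISK}p{PART}"])
```

    The idea being that Bazzite gets its own EFI partition, so its bootloader files never have to share space with the Windows ESP.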





  • Honestly? It’ll probably be an amalgamation of different tech. That’s at least part of the reason I’m not sure it should work. Using identity to certify age or to age-gate products this way, when so much data is already being collected about users, doesn’t make sense in and of itself. It either leads to a database that’s dangerous to store, or it leads to government entities using such services to spy on people. Or both.

    If the data that’s already out there about me, collected by data brokers, can’t prove what age I am (and it absolutely can, even when it’s anonymized), then I suspect no other system by itself will work either. Because really, what we’re talking about here is four things:

    1. Linking access to age verification.
    2. Linking identity to age verification.
    3. Anonymizing that data so the service, or anyone with access, can’t store it or use it for anything other than age verification (see the sketch after this list).
    4. Verifying that the person the device/token/certificate/verified medium is linked to is the person actually using the device.
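
    As a sketch of what point 3 would require (Python; every name here is hypothetical, and a shared HMAC secret stands in for the asymmetric signatures or zero-knowledge proofs a real scheme would need): an issuer checks your ID out of band and hands back a token carrying nothing but an over-18 claim and an expiry, which a service can then verify without ever learning who you are.

```python
import hashlib, hmac, json, secrets, time

ISSUER_SECRET = secrets.token_bytes(32)  # held only by the verification authority

def issue_age_token() -> dict:
    """The issuer has checked an ID out of band; the token it returns
    deliberately carries no identity, only the age claim and an expiry."""
    claim = {"over_18": True,
             "exp": int(time.time()) + 3600,
             "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def service_accepts(token: dict) -> bool:
    """The relying service verifies the claim without learning who you are."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["sig"])
            and token["claim"]["exp"] > time.time())

token = issue_age_token()
print(service_accepts(token))  # True, for *any* bearer of the token, adult or not
```

    At best that covers points 1 through 3. The token says nothing about who is actually holding the device, which is point 4, and exactly the problem below.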

    So, say you were to use the blockchain method, and say the device was verified. How would I verify that it’s me using the device (me being the person who certified their age via blockchain or some other method)? What prevents me from unlocking the device and handing it to my kid? What prevents my kid from using the device without my knowledge (circumventing the password, etc.)?

    That’s at least part of the reason Roblox wants to use facial recognition to verify users. But how often are we doing that check? Once isn’t enough; it’s not a hard barrier to cross. And say it’s twice, three times, once a week. Say you use AI-generated pictures to bypass that. Then Roblox, or the service they contract with for verification, has to maintain a database and compare pictures to each other, etc.

    Databases can be hacked. That information can be stolen, linked to driver’s licenses, used for reverse image searches, and so on. If you or your child has ever posted a picture to the internet, that picture can be used against you or your kid. It could be used to verify further accounts outside your control.

    Following this to its logical conclusion, you’d need to use a combination of things. Something you have (a YubiKey or some other authenticator, an ID, a credit card). There’s nothing stopping a person from selling this along with the account credentials.

    Something you know (a password, passphrase, etc.). These are the account credentials that get sold.

    Something you can’t change about yourself (an iris scan, fingerprint, voice clip, etc.). This is the dangerous-to-store information that, when leaked or breached, would damage the life of the user in question.

    Someone somewhere is going to need to keep a record of that to prove you are you, which means it can’t, by design, be anonymous. And it means there’s a database out there that’s dangerous to the users but has to be maintained for the purpose of authentication. And that’s why this doesn’t work.
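
    As a toy illustration of that last point (Python; a hypothetical schema, not any real service’s): here is the record a verifier would have to keep to check all three factors later. Two of the columns can be rotated after a breach; one cannot:

```python
import hashlib, hmac

# The record the verifier MUST keep to authenticate a user later.
# The password hash can be re-salted and the token reissued after a breach;
# the biometric template cannot be changed, so leaking it burns that factor
# for the life of the user.
USER_DB = {
    "user-7421": {
        "token_id": "authenticator-serial-0001",                       # something you have
        "pw_hash": hashlib.sha256(b"salt|" + b"hunter2").hexdigest(),  # something you know
        "biometric_template": b"enrolled-template-bytes",              # something you are
    }
}

def authenticate(user: str, token_id: str, password: bytes, sample: bytes) -> bool:
    rec = USER_DB.get(user)
    if rec is None:
        return False
    pw_ok = hmac.compare_digest(
        rec["pw_hash"], hashlib.sha256(b"salt|" + password).hexdigest())
    # Real biometric matching is a fuzzy similarity score, which is exactly why
    # systems store usable templates rather than one-way hashes of them.
    bio_ok = rec["biometric_template"] == sample
    return token_id == rec["token_id"] and pw_ok and bio_ok

print(authenticate("user-7421", "authenticator-serial-0001",
                   b"hunter2", b"enrolled-template-bytes"))  # True
```

    That table has to live somewhere for authentication to work at all, so the scheme can’t be anonymous by design, and the most sensitive column is the one that can never be reissued.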






  • “We need to get beyond the arguments of slop vs sophistication,” Nadella wrote in a rambling post flagged by Windows Central, arguing that humanity needs to learn to accept AI as the “new equilibrium” of human nature. (As WC points out, there’s actually growing evidence that AI harms human cognitive ability.)

    Going on, Nadella said that we now know enough about “riding the exponentials of model capabilities” as well as managing AI’s “‘jagged’ edges” to allow us to “get value of AI in the real world.”

    “Ultimately, the most meaningful measure of progress is the outcomes for each of us,” the CEO concludes, in an impressive deluge of corporate-speak that may or may not itself be AI-generated. “It will be a messy process of discovery, like all technology and product development always is.”

    TLDR: That’s not what he said, and rehashing the same interview in article after article under this frankly clickbait headline is getting old.

    Fuck Nadella and his AI bullshit, but could we not keep rehashing this?



  • I’ve never been comfortable with Ring cameras specifically because even if one isn’t a tool to be harnessed by the state, it’s still a tool to be harnessed by anyone holding a grudge. The vast majority of IoT users don’t know the basics of securing their network or their cameras; they connect things to the internet for the convenience and that’s it. And the cameras pick up the comings and goings of people who have no real way to withhold consent to being recorded whenever they leave their house or return to it. My neighbor doesn’t need that information. And while, yes, they could sit in their house and watch through the curtains at all hours, there would still be a physical limit to what they could see.

    For the same reason I don’t want drones constantly surveilling my home, I don’t want camera footage I have no access to but that can be used against me by someone who doesn’t like how I rake the leaves in my driveway.

    Anyone who’s been in a dispute with a neighbor who’s got a Ring camera knows this struggle. And the advice you get, by and large, is to get one of your own. No thanks.


  • My main concerns mostly come down to the fact that Google, in my experience, has always had the benefit of enticing software and services that are extremely invasive but also very convenient (even if we take IoT off the table for a moment). This is mostly due to how invasive Google Play Services is, and how invasive the Google app has been since the first iterations of Google Assistant (Google Now). I’m concerned that even those of us who have done what we can to turn off Gemini and avoid generative AI are still compromised regardless, because big tech has a chokehold on the services we use.

    So I suppose I’m trying to understand what the differences are in how these two types of technology compromise cyber security.



  • Before generative AI, lots of companies had AI/algorithmic tools that posed a risk to personal cybersecurity (Google’s Assistant, Apple’s Siri, MS’s Cortana, etc.).

    Is the stance here that AI is more dangerous than those because of its black-box nature, its poor guardrails, the fact that it’s a developing technology, or its unfettered access?

    Also, do you think the “popularity” of Google Gemini comes from people already being indoctrinated into the Assistant ecosystem before it became Gemini? Google already had a stranglehold on the search market, so the integration of Gemini into those services isn’t seen as dangerous; people are already reliant, and Google is a known brand rather than a new “startup”.


  • Military bases are often at least partly powered by renewables, and they are moving away from relying on civilian infrastructure because it is so vulnerable. This article is more about reliance on American tech companies than about making the case that these corps’ data centers are pretty much synonymous with military bases in how they use civilian infrastructure and cost taxpayers money (which I think was the point of the title, but I’m still not sure after reading most of the article).