• 4 Posts
  • 835 Comments
Joined 3 years ago
Cake day: August 5th, 2023

  • The harm doesn’t come from infinite scroll, autoplay, or algorithmic feeds in a vacuum.

    But we have statistically shown that when you gamify the system and the content is harmful to consume in excess, those factors are what make it dangerous.

    Tricking the brain into doing something harmful to itself through gamification is the problem. The algorithm, autoplay, and infinite scroll are just mechanisms that facilitate it. Novelty only plays a small part. The algorithm by itself doesn’t provide a dopamine hit. Infinite scroll by itself doesn’t provide a dopamine hit. Autoplay by itself doesn’t cause a dopamine hit.

    Even when you combine all three, the dopamine hit won’t come unless the content being pushed is enough to cause a rush of dopamine. And that rush often comes from things like upvotes and downvotes, badges, and achievements. Follower counts and the other metrics individual users rely on for dopamine are being weaponized against them to make money. And that was intentional on the part of Meta execs.


  • I have a question: what if it’s not just at a parenting level? What if it’s also at a school level? Because I think there is at least partially a disconnect in media and internet literacy among people of all ages, children and parents included.

    I think we’re going to need such skills going forward, and there are places in the world where students are being taught them and benefiting significantly.

    Yet the immediate knee-jerk reactions seem to be to blame the parents and blame the companies that facilitate access to the content.

    It doesn’t have to be a “parents by themselves against the world” system. But it also can’t just be a “companies protecting the children” system, because that’s not what companies do or are for. The need to maintain a profit margin flies directly in the face of holding companies responsible, and the laws seem intent on capping the monetary consequences of a breach.

    I do feel that the least a parent should be required to do, before complaining to a governing body that someone else is “harming” their child, is show that they have done their due diligence to protect said child. We punish parents for willful negligence and child endangerment all the time. I don’t understand why this is different, but I also wonder whether there are other options for educating both children and adults that could help the situation significantly.



  • I agree that your situation isn’t an edge case (I found my dad’s locked porn collection of VHS tapes at the age of 9 and learned that the lock could be circumvented with a fridge magnet).

    But on the other hand, let’s say you post something to the internet that may be considered not okay for children. And let’s say that thing is about gunpowder (which you absolutely can make from foraged natural ingredients). It’s your personal website, it’s labeled as not intended for children, and you aren’t a big company, so you can’t just hire another company for things like age verification.

    Then you get sued by a regulatory body in another country because you didn’t adhere to their laws. Does that sound reasonable to you?

    If a parent or guardian takes every reasonable precaution within the law to keep their kid safe, and that kid still gains access to something that can harm them, that’s an accident. If the parent takes no precautions and lets the child they are legally responsible for raw-dog life, because that’s too hard, or they don’t care, or whatever, then it seems reasonable to me that they be held responsible under the law.

    Their right to have a third party protect their children ends at my right to privacy, which to me extends to a right to anonymity, specifically because it has already been shown that without anonymity, privacy just doesn’t exist in this age of the internet.

    What does that mean? It means that companies that collect your data but promise “privacy” cannot be trusted to uphold that promise, which means the only option left is to be as anonymous as possible.

    I want you to understand that I do agree that when one kid figures out the loophole, that loophole spreads like wildfire.

    But on the other hand, if a child figured out how to turn off the family car’s security system, grabbed the keys, and went for a joyride with their friends, is it the fault of the parents or of the car manufacturer? Because one of them is legally liable.

    Would it be acceptable to have to send your thumbprint to BMW every time you wanted to drive your car?


  • During a conversation with my sister about going back to school to finish her electrical engineering degree she basically said this:

    As an older student. What /I/ see is a bunch of students who don’t really know how the workforce works, but who HAVE grasped these facts:

    1. The school wants you to have a social life. That’s why there are all these events, and they cancel class for games and stuff sometimes. So you SHOULD be doing social stuff.
    2. It’s a dog-eat-dog world, and the only way to get ahead is to use every tool at your disposal, including and especially GenAI.
    3. Everybody is doing it, especially the smart people, including the grad students, AND sometimes even the teachers. Even the professionals are like “this is how you can use ChatGPT!”
    4. So, to get everything done and have A Good College Experience, obviously you just use ChatGPT! Why spend hours in the library like all those losers???

    And, then, on the other hand, I see:

    A bunch of very tired, overworked, overwhelmed teachers who are doing their damnedest with students who can’t do even half of the bare minimum that was expected when they went through college.

    Bending over backwards and then back upwards to pass these kids, by hook or by crook.

    Like, people giving 100s of points of extra credit.

    People putting together a whole website with specific terms for each unit AND the slide shows from the classes, plus quizzes that are pass/fail (as in, you get the credit if you take the quiz on time, period, regardless of your score, so you can use it to review). AND the weekly essays are either:

    1. A 750-word summary of the reading + answers to 4 questions (no word limit) + two questions you pose about the text, no specific formatting required.

    OR

    1. 250 word reflection: how does what we learned apply to your life? No specific format required.

    I was writing 5 page research papers for classes my first year of college. What the fuck. What the fuck.

    (She went back to school to finish her degree)

    You’re telling me you can’t write a 750 word SUMMARY that doesn’t need to be formatted beyond “put your name on it and use punctuation”???

    The classes themselves are not hard. IF you have a good grounding in logic and math and writing. Which these people DO NOT.

    They have a good grounding in letting ChatGPT do the work for them and letting the teacher tell them how to do everything.

    And goddamn I might get 77s because I can’t do algebra and I can’t do math fast enough to finish a test.

    But at least I’m not failing because I can’t THINK.

    She also mentioned that a lot of professors are trying to walk the thin line between failing students (who will then go to places like ratemyprofessor.com and leave what essentially amounts to bad reviews, which can threaten their employment) and passing students who aren’t actually grasping the basics. I think social media is just compounding the problem because of that.

    Imagine working in fast food and already getting complaints all the time and then having to worry about someone putting you on a rate my server website where they trash talk you and you have no recourse to have that information taken down.

    At least with yelp it’s not first and last names and it’s the business that takes the flak.



  • I think the simple fact that some of the people in this thread don’t understand is that the people they’re asking to vet the code don’t know how.

    They may mean that the people who can vet code should do so before making a fuss about the AI written portions of it, but I don’t know that most of the people in opposition to their comments understand that context.

    I haven’t coded anything since the ’90s. I know HTML and basic CSS, and that’s it. I wouldn’t have known where to start without guides explaining what commands in Linux do and how they work together. Having grown up with various versions of Windows and DOS, I’d still consider myself a novice computer user. I absolutely do know how to open a command line and make things happen, but I wouldn’t know where to start to make a program. It’s not part of my skill set.

    Most users are like that. They engage with only parts of a thing. It’s why so many people these days are computer illiterate due to the rise of smartphone usage and apps for everything.

    It’d be like me asking a frequent flyer to inspect a plane engine for damage or figure out why the landing gear doesn’t retract. A lot of people wouldn’t know where to start.

    I fully agree that other coders on the internet who frequent places like GitHub and make it a point to vet the code of other devs who provide their code for free probably should vet the code before they make assumptions about its quality. And I fully agree that deliberately stirring shit without actually contributing anything meaningful to the community or the project is really just messed up behavior.

    But the way I see it, there are two different groups, and they have very different views of this situation.

    The people who can’t code are consumers. Their contribution is to use the software if they want and, if it works for them, to spread what they like about it by word of mouth. Maybe to donate, if they can and the dev accepts donations.

    If those people choose to boycott, it’ll be on the basis of their moral feelings about the use of AI or at the recommendation of the second group due to quality.

    The second group are the peer reviewers so to speak and they can and should both vet the code and sound the alarm if there’s something wrong.

    I suppose there’s a third subset of people in the case of FOSS work who can and often do help with projects, and I wonder whether that is better or worse, for the reasons listed in the thread, like poorly written human code and simple mistakes.

    Humans certainly aren’t infallible. But at least they can tell you how they got the output they got or the reason why they did x. You can have a rational conversation with a human being and for the most part they aren’t going to make something up unless they have an ulterior motive.

    Perhaps breaking things down into tiny chunks makes AI better, or its outputs more usable. Maybe there’s a “sweet spot”.

    But I think people also worry that those who use AI often start to offload their own thinking onto it, and that’s dangerous for many reasons.

    This person also admits to having depression. Depression can affect how you respond to information and how well you actually understand what’s in front of you. It can make you forget things you know, or make them that much harder to recall.

    I know that from experience. So in this case does the AI have more potential to help or do harm?

    There’s a lot to this. I have not personally used Lutris, but before this happened I wouldn’t have thought twice about saying that I’ve heard good things about it if someone asked me for a Heroic launcher style software for Linux.

    But just like with the Ladybird browser, I don’t know that I feel comfortable suggesting it if this is the state of things. For the same reason I don’t currently feel comfortable recommending Windows 11 or Chrome.

    There are so many sensitive things that OSes and web browsers handle that people take for granted. If nobody were sounding the alarm about those, I feel like nothing would get better. By contrast, Lutris isn’t swimming in a big pond of sensitive information, but it is running on people’s hardware, and users should have both the right to be informed and the right to choose.