One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.
The state is now seeking court-mandated changes, including “protecting minors from encrypted communications that shield bad actors.”
I don’t see any of the people celebrating this decision discussing this. Perhaps it’s a misrepresentation by the author; I can’t find the actual decision text to check.
This is going to harm small non-corporate websites, not just social media, far more than it harms Facebook or TikTok. “Harmful content” is also going to include things like LGBTQ content, especially anything trans-related, and ‘antisemitism’ (but probably not actual antisemitism).
The quote is from New Mexico AG Torrez.
https://nmdoj.gov/press-release/new-mexico-department-of-justice-wins-landmark-verdict-against-meta/
Local wannabe crack dealer Mike Masnick says crack isn’t harmful and life without it would be boring. More at 11.
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
This feels like an awful argument to make. It’s not the presence of those features that makes Meta and co. so shit; it’s the fact that they provably understood the risks and the effects their design was having, knew it was harming people, and continued anyway. I don’t care if we’re talking about a little forum run by a grandma and grandpa sharing their jam recipes; if they know they’re causing harm and don’t change their behavior, they should be liable.
The harm doesn’t come from infinite scroll, autoplay, or algorithmic recommendations in a vacuum.
But it has been shown statistically that when you gamify the system and the content is harmful to consume in excess, the combination of those two factors is what makes it dangerous.
Tricking the brain into doing something harmful to itself through gamification is the problem. The algorithm, autoplay, and infinite scroll are just mechanisms to facilitate that. Novelty only plays a small part. The algorithm by itself doesn’t provide a dopamine hit. The infinite scroll by itself doesn’t provide a dopamine hit. The autoplay feature by itself doesn’t provide a dopamine hit.
Even when you combine all three, the dopamine hit won’t come if the content being pushed isn’t sufficient to cause a rush of dopamine. And that rush often comes from things like upvotes and downvotes, badges, and achievements. Follower counts and the other metrics individual users rely on for dopamine are being weaponized against them to make money. And it was intentional on the part of Meta execs.
“We designed, marketed, and sold the gun, but we didn’t think anyone would use it.”
It’s like if someone had a forum where insurrectionists were discussing how to build bombs and where they were going to use them, and the owners had an internal meeting where they said, “Hey, we’re hosting some pretty awful people, should we maybe report them or shut this down?” and the answer was, “Nah, they’re paying users, and we want their money.”
Pretty sure Section 230 wouldn’t protect them, either.
Yeah, this feels very much like “censor content, but don’t change Meta’s practices.”
Which raises the question: does the author know what they’re cheering for?
You can bet they do.
It’s like he’s describing a slot machine with unpainted wheels, leaving out the context that it’s in a casino with a big “paint me and enjoy a share of the profit” sign above it.
The social media machine was designed to be a self-serve addiction generator. It intentionally used every trick it could legally get away with.
Also, they can now generate content without users, which they already do a lot of on Facebook.
I don’t know. Seems like self-control issues. People can get addicted to anything: shopping, sex, internet use, work, gaming, exercise. I also disagree with prohibitions on gambling, drug use, prostitution: it’s their money, their body, etc.
Penalizing systems of communication and information delivery seems like overreach. The harm seems phony, and easily averted by basic self-control.
Addictive personality is a proposed set of traits that makes sufferers more vulnerable to developing addictive behaviors around things like gambling or social media. Does it help to frame it in a different light if you think of it as these companies exploiting vulnerable people’s disorders to extract money from them?
Telling those people to just have self control is like telling someone with depression to just stop being sad.
Or telling someone stupid to be more clever, as the case may be.
This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.
Bull fucking shit. This is not about platforms being held responsible for user content. This is about adding points and badges and achievements and all kinds of things designed to reward engagement with dopamine.
The author’s example of all content being paint drying would absolutely be addictive if the platform added an achievement for watching 10 different colours. Or: congratulations, you’ve watched paint dry for 100 hours! As a reward, you get a fancy new emote! THAT is what these platforms do, that is what is addictive, and that is what they’ve been convicted for.
And it is not a loophole to get around Section 230, as the author claims.
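To make that concrete, here’s a minimal sketch of the kind of engagement-reward loop being described. Everything here is hypothetical illustration, not any platform’s actual code; the point is that the reward logic keys entirely on how much you consume and never looks at what you consume:

```python
from dataclasses import dataclass, field

@dataclass
class Viewer:
    videos_watched: int = 0
    hours_watched: float = 0.0
    badges: list[str] = field(default_factory=list)

# Hypothetical reward thresholds, keyed on raw consumption.
# Nothing here ever inspects WHAT was watched -- only HOW MUCH.
ACHIEVEMENTS = [
    (lambda v: v.videos_watched >= 10, "Connoisseur: 10 different videos!"),
    (lambda v: v.hours_watched >= 100, "Dedicated: 100 hours! Enjoy a fancy new emote!"),
]

def record_view(viewer: Viewer, duration_hours: float) -> list[str]:
    """Log one view and return any newly unlocked rewards."""
    viewer.videos_watched += 1
    viewer.hours_watched += duration_hours
    unlocked = []
    for condition, badge in ACHIEVEMENTS:
        if condition(viewer) and badge not in viewer.badges:
            viewer.badges.append(badge)
            unlocked.append(badge)  # the reward fires here, content-blind
    return unlocked
```

Swap the paint-drying videos for anything else and the loop behaves identically, which is the point: the addiction mechanics live in the reward layer, not in the user-generated content that Section 230 covers.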
I’m not disagreeing with you when I say this; I’m just not on social media other than Lemmy and YouTube at this point, so I’m out of the loop. What are these sites doing that gamifies watching content? I get all the other stuff for posting content, like likes and views; that incentivises content producers. But how are viewers getting a “likes and views” equivalent on Facebook?
Let’s not forget the years of literal psychological experiments that Meta conducted on its users to find out exactly what factors led to higher engagement.
This isn’t a simple message board. This is a highly-engineered, personalized content delivery system with the goal of serving as many ads as possible.
Surprise surprise. If you go through Techdirt’s archives, you can see Mike Masnick has spent thousands of words losing his shit any and every time Facebook has faced ANY criticism. I don’t know if he has a financial interest in them (like he does with Bluesky), but the moment someone suggests reining them in, here comes Masnick to defend one of the richest, most lawyered-up companies around.
Mike Masnick is on the Bluesky board of directors. Could this position be affecting his judgment on this specifically? Because usually I expect Techdirt and Mike himself to be much more reasonable.
Yes, of course. Bluesky is also social media and so the precedent set by these cases will apply to it. Besides, knowledge of a subject does tend to affect your judgment.
Bluesky is also social media
So is Lemmy…
Yes, but everyone on Lemmy knows that the law only applies to the bad guys.
I was wondering the other day whether Lemmy or Bluesky have any algorithms actively trying to keep users engaged.
Cool thing about Lemmy is you can just read the code and find out
IIRC they also explained somewhere, in plain English, what the sorting methods do. My layman brain thinks that’s a kind of algorithm.
Kindly correct this layman if I’m misunderstanding :)
I’m also a layman, but I have read some discussions about this exact comparison. Essentially, the big mainstream sites often have a personalized algorithm for each user that learns and adapts specifically to that user, feeding them whatever junk-food content it can to keep them engaged. Algorithms on things like base Lemmy, or maybe Reddit in the past, just have a sort function, like Excel, that props up posts with more likes or more comments. You can see what other people are interested in, but it’s not targeting YOU.

The predatory targeting algorithm can put a person into a self-fulfilling echo chamber that in some ways resembles psychosis, and it could naturally evolve into actual psychosis for some individuals. The old verbiage of “touch grass” was the prescription for fighting the effects, but it’s a lot harder to “touch grass” when people are increasingly online and have fewer and fewer avenues to get out of their own echo chamber while staying online almost exclusively.

I’m not an expert, and the people I got this from have no credentials I can source, but the logic seems sound to me. Anyone with better credentials should weigh in if I’m wrong.
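To make that contrast concrete, here’s a rough sketch of the two approaches. The first function is in the spirit of Lemmy’s published “hot” rank (the real constants live in the Lemmy source and may differ between versions); the second is a caricature of a personalized engagement-maximizing recommender, with made-up feature names:

```python
import math
from datetime import datetime, timezone

def hot_rank(score: int, published: datetime) -> float:
    """Non-personalized, time-decayed rank: every user who opens
    the feed sees posts in the same order."""
    hours_old = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    return 10000 * math.log10(max(1, score + 3)) / ((hours_old + 2) ** 1.8)

def personalized_rank(post_topics: dict[str, float],
                      user_affinities: dict[str, float]) -> float:
    """Caricature of an engagement-maximizing recommender: the rank
    depends on the individual user's learned profile, so no two
    people see the same feed, and whatever keeps YOU scrolling
    floats to the top."""
    return sum(user_affinities.get(topic, 0.0) * weight
               for topic, weight in post_topics.items())
```

The first is just a sort key anyone can verify by reading the code. The second is a feedback loop, because the user’s profile is itself updated by whatever they engaged with last, and that loop is where the echo chamber comes from.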
The Internet went from globalizing us to partitioning us pretty suddenly, and I think we are seeing the effects now.
Normally I’m all for Techdirt’s takes, but I think this one is a bit off the mark, because I legitimately think infinite scroll and autoplay are insidious, and actually harmful enough to be treated as dangerous design decisions.
The whole point of Section 230 is that communications companies can’t be held responsible for harmful things that people transmit on their networks, because it’s the people transmitting those harmful things who are actually at fault. That was reasonable in the initial stages of the Internet, when people posted on bulletin boards (or even early social media) and harmful content had a much smaller reach. People had to “opt in”, essentially, to be exposed to that content, and if they stumbled on something objectionable they could easily look away.
But the purpose of infinite scroll and autoplay is to get people hooked on content. The algorithms exist to maximize engagement, regardless of the value of that engagement. I think the comparison to cigarettes is particularly apt: they are looking to hook people into actively harmful behaviors, for profit. And the algorithms don’t really differentiate between good engagement and harmful engagement; anything that attracts the user’s attention is fair game.
The author’s points regarding how these rulings can be abused are correct, but that doesn’t negate how fundamentally harmful these addictive practices are. It will be up to lawmakers to make sure that the laws are drafted in such a way that they can be applied equitably… (So maybe we’re screwed after all…)
This guy has an addiction lol ironic
In truth, this is part and parcel of age controls as an excuse to ID everyone.
“For the children” tech laws should all be abolished. Why should I be burdened because you can’t be bothered to raise your own damned kids properly?
You’re right, because kids have been shown to listen to their parents all the time and have never had problems handling adult situations when their parents aren’t around 100% of the time. Even amazing parents raise kids who do stupid shit, and once those amazing parents aren’t around their kid 100% of the time, those kids are still kids and will make bad decisions. This is especially true when it’s something that literally every person around them is doing (adults, kids, friends, celebrities).
Sure, you’re correct that parents can’t hover at every moment to correct their kid every time they make a mistake. But at this point, it’s easier to put controls that actually work on any internet-connected device you give them than to prevent whatever shenanigans they could get up to outside of supervision. Give them a tablet with parental controls; that’s better control than when they go to the corner to buy drugs, or whatever the real-life equivalent is. It’s never been easier for a parent to control their child’s online consumption than now, and it will only get better. The offline risks aren’t changing the same way.
We all did dumb shit as kids, but tech wasn’t anywhere near what it is now.
These platforms need to be punished and held to account for the pervasive technology they have designed for profit; these things (FB, Insta, TikTok, etc.) shouldn’t be able to exist in their current state in the first place. There were no guard rails put in place. Just like the flood of AI, technology moves so much quicker than legislation can keep up, and companies do really shitty things with that.
I believe it starts at a parenting level, but it’s much more difficult to manage these days compared to 20 years ago. Age-verification bullshit is not the answer, but parents need to be given some form of help against these fuckers and their incredibly easy-to-access addiction machines.
I have a question: what if it’s not just at a parenting level? What if it’s also at a school level? Because I think there is at least partially a disconnect between media and internet literacy and people of all ages, including children and parents.
I think we’re going to need such skills going forward, and there are places in the world where students are being taught these things and benefiting from them significantly.
Yet the immediate knee-jerk reactions seem to be to blame the parents and blame the companies that facilitate access to the content.
It doesn’t have to be a parents-by-themselves-against-the-world system. But it also can’t just be a “companies protecting the children” system, because that’s not what companies do or are for. The need to maintain a profit margin flies directly in the face of the aim of holding companies responsible, and the laws seem intent on capping the monetary consequences of a breach.
I do feel that the least a parent should be required to do, before complaining to a governing body that someone else is “harming” their child, is show that they have done their due diligence to protect said child. We punish parents for willful negligence and child endangerment all the time; I don’t understand why this is different. But I also wonder if there are other options for educating both children and adults that could help significantly.
I guess in response to your last paragraph, the issue is the predatory nature of the attention addiction machines these companies make.
You could compare it to a child getting into a van with “free candy” written on the side. The door was open; if you assume someone was standing next to the van asking the kid to get in, that would be the advertising. Now the kid gets abducted. In the case of social media, it’s their “attention” that’s held hostage.
Now, would the parents have had to tell the kids not to get into a van with “free candy” written on it before they could report it to the police? Bad luck otherwise? And what if every month a new van rocks up with more bells and whistles: it’s a different colour, it’s got flames down the side, whatever. The point is it’s different and cool and more appealing each time. More kids go missing. The “predators” have figured out what makes these kids tick and what makes them more likely to get in the van every time.
It’s a bit of an out-there and confronting comparison, but really, these companies are preying on your mind instead of your body, which apparently is fair game. They are still predators.
They know the harm their platforms cause, they suppress studies that report that harm, they cover it up, they fight tooth and nail and spend millions lobbying government to let them continue to do it.
Back on track, sorry: schools are also responsible, but you run into the same issues once companies start targeting school kids, like Google did with Chromebooks: the shittest PCs, sold at a loss, just so they could hook the younger generation into their ecosystem of surveillance and advertising early.
Companies will NEVER protect the children. They will only ever protect shareholders, profits and their pedo CEOs.
Real change will only ever come from real (not sponsored) education, government legislation that isn’t bullshit (I don’t know what this would look like, but ID checking isn’t it), and holding the tech bros increasingly accountable for their fucked-up apps.
I think you make some good points here, but for context, I do think there’s a level of responsibility on the parents in combination with the companies. There are plenty of “online literacy” classes that I think would be appropriate for adolescent education. I’m the unfortunate beneficiary of having to master cursive as a class one year and then typing the next; schools would do more good teaching kids internet literacy, and they can probably drop some of the old stuff. They also don’t teach several other things, like financial literacy, in many cases (despite heavy capitalist leanings in real life). The education system sucks, but that’s not an excuse to let iPad kids control my freedoms, and the root cause for age verification has never been about protecting children in the first place.
Kids should be banned from the platforms.
But that requires the tools to do so. And then we are back at checking on ages and identities.
This is probably an extreme take, but kids shouldn’t be anywhere near a tablet, especially while they’re still really young.
It’s kind of a tough balance. Yes, unrestricted tech use is an issue for young children, but on the other hand, using tech while young is the best way to make it a natural part of your experience of the world, and tech isn’t going away. If you go to the other extreme and say no tech whatsoever before a certain age, are you setting the kid back against more tech-literate peers? There’s also the consideration that’s been discussed around alcohol forever: by making it an “adult thing” and effectively a rite of passage, do you cause more problems and abuse in young adults than if it was always part of their experience and the focus was on responsible use instead of total abstinence?
I completely agree. It would be amazing if we could nationally, or even globally, enforce age restrictions that give children an internet kiddie pool to learn and grow in safely. But we live in a time where the people pushing this in government cannot be trusted to use that information only for the stated reason. “For the kids” is all made up and isn’t helping kids. Demanding that everyone give up privacy in order to not actually help kids really highlights how corrupt the people pushing for this are.
The author reads like he doesn’t understand context or the legal idea of a rational actor. What users are going to purposefully upload boring content?
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.