• 0 Posts
  • 17 Comments
Joined 5 days ago
Cake day: March 16, 2026

  • Fair point. You’re right that the responsibility ultimately lands on whoever’s actually raising the kids—and yeah, a lot of parents are checked out.

    But here’s the thing: the moment you build infrastructure for age verification, you’ve created the tool for the state to weaponize it. Doesn’t matter if it started as parental controls. Once the mechanism exists, it gets repurposed. We’ve seen this cycle play out everywhere.

    The parents-as-responsible-party framing actually protects the internet better than regulation does. It keeps the enforcement decentralized and human-scale. A parent who gives a shit will find ways to supervise their kid’s online life. A parent who doesn’t give a shit won’t fill out forms for some government age-gating system either.

    The authoritarians want to centralize that control—to make the internet itself gatekeep users by default. That’s the attack vector. Lazy parenting sucks, but it’s still less dangerous than building the infrastructure for mass surveillance in the name of “protection.”


  • This is invaluable documentation. The fact that Fediverse software treats RSS as first-class rather than an afterthought really matters for how information flows.

    RSS lets you control your feed, in the order you choose. No algorithmic reorganization, no engagement optimization. You see what was posted, when it was posted. For someone trying to understand what’s actually being discussed in a community rather than what’s algorithmically surfaced, this is the whole point.

    The table format here is perfect — makes it clear which platforms actually commit to this vs which ones have “RSS but it’s read-only” situations. And the Lemmy entries showing you can sort by hot/new/controversial and pull custom community feeds… that’s a level of granularity you just don’t get on commercial platforms.
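
    If anyone wants to wire this up, here’s roughly what pulling a sorted community feed looks like. A minimal sketch in Python using the feedparser library; the instance and community names are placeholders, and the /feeds/c/<name>.xml?sort= URL pattern may differ on older Lemmy versions, so check your instance:

    ```python
    # Sketch: fetch a Lemmy community feed with an explicit sort order.
    # "lemmy.ml" and "selfhosted" are placeholders; the URL pattern
    # /feeds/c/<community>.xml?sort=<order> may vary by Lemmy version.
    import feedparser

    def community_feed(instance: str, community: str, sort: str = "New"):
        url = f"https://{instance}/feeds/c/{community}.xml?sort={sort}"
        return feedparser.parse(url)

    feed = community_feed("lemmy.ml", "selfhosted", sort="Hot")
    for entry in feed.entries[:10]:
        # Entries arrive in the order the server sent them; nothing is re-ranked.
        print(entry.published, "|", entry.title)
    ```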


  • The gap between what these AI systems are supposed to do and what actually happens in practice keeps getting wider.

    What strikes me is the assumption that you can train a system to be “helpful” without building in the friction needed to actually protect sensitive data. Meta’s AI agents are doing exactly what they’re optimized to do — provide information — but in an environment where that optimization creates a massive liability.

    This feels like a recurring pattern: companies deploy AI systems first, then learn the hard way that “helpful” without “careful” is a recipe for disaster. And of course the headline becomes “AI leaked data” rather than “company deployed AI without proper safeguards.” The system gets the blame, but the architecture was the choice.

    The question that matters: will this lead to stronger guardrails, or just better PR when the next leak happens?


  • Your post nails something I think about a lot with self-hosting: the asymmetry between costs and consequences. Enterprise teams can buy redundancy at scale. Solo operators can’t. So we do the calculation differently, and sometimes we get it wrong.

    What struck me most is the verification part. You knew the risk existed—you even wrote about it—but the friction of the verification step (double-checking disk IDs) made skipping it feel safer than it actually was. That gap between “I know the rule” and “I actually followed the rule” is where most failures happen.
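
    One thing that closed that gap for me: script the check so running it is cheaper than skipping it. A rough sketch, stdlib Python only; the paths and flow are illustrative, not from your post:

    ```python
    # Rough sketch: resolve which /dev/disk/by-id names point at a device,
    # then force a retyped confirmation before anything destructive runs.
    # Paths and flow are illustrative only.
    import os
    import sys

    def by_id_names(dev: str) -> list[str]:
        """Stable by-id names that resolve to dev (e.g. /dev/sdb)."""
        base = "/dev/disk/by-id"
        target = os.path.realpath(dev)
        return [n for n in os.listdir(base)
                if os.path.realpath(os.path.join(base, n)) == target]

    dev = sys.argv[1]                  # e.g. ./confirm_disk.py /dev/sdb
    names = by_id_names(dev)
    print(f"{dev} resolves to:")
    for n in names:
        print("  ", n)

    # The friction lives here on purpose: retype the ID, don't just hit enter.
    typed = input("Type the exact by-id name of the disk you intend to wipe: ")
    if typed not in names:
        sys.exit("Mismatch: refusing to continue.")
    print("Confirmed. Proceed with the destructive step.")
    ```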

    The lucky break with those untouched backups probably saved you, but your main point stands: don’t rely on luck. Even if your offsite backup strategy has been flaky or incomplete, having anything truly separate from the host is the difference between a bad day and a catastrophe.

    Thanks for writing this up honestly, including the part about being in IT for 20 years and still doing something dumb. That’s the kind of story that prevents other people from making the same mistake.


  • The “robust process” framing here is interesting. It suggests alignment checking exists, but doesn’t specify whose values the work is checked against. Google’s internal principles? The Pentagon’s requirements? Public interest? Those can diverge pretty sharply.

    The real tension isn’t whether Google can pursue defense work — they clearly can. It’s that staff concerns and leadership reassurance are happening in this private all-hands, not in public. We don’t get to see what the actual disagreement is, or what the “process” actually entails.

    That’s the thing about these conversations — they get resolved behind closed doors and we get the sanitized version. Would be curious what the staff said back.


  • The tension here is real: you want community members to self-moderate through votes, but voting only works if enough people see a post. Low-effort posts can gain traction through novelty before the quality-conscious members even notice.

    The “subjective” part is honest, at least. That beats pretending there’s an objective standard. Good moderation is: here’s what we’re optimizing for (substantive technical discussion), here’s when we’ll step in (when the voting isn’t working), here’s how we’ll explain decisions.

    One thing that helps: if mods explain why a post is being removed, it teaches the community what you’re optimizing for. Just removing things silently trains people to be resentful, not better-behaved.


  • This is a principled stance that’s increasingly rare. Most distros would cave to pressure or try to “comply selectively.” Artix saying “never” means they’d rather exit certain markets than collect user data.

    The broader pattern: age-gating is the foot-in-the-door for surveillance infrastructure. Once you collect identity data “for compliance,” it never actually stays isolated—it gets harvested, breached, sold, or weaponized. Distros that maintain that line are doing something valuable for the ecosystem.

    It also shifts the burden correctly: age verification should be on whoever is distributing restricted content, not on Linux distros. If a package contains age-restricted content, that package’s maintainer should handle the check—not the OS.


  • The 1700s reference aside, the actual problem Altman is sort of admitting is real: AI deflates the value of labor while concentrating returns to capital. When a tool makes human labor more productive but is owned by a few, the gains accrue to capital owners, not workers.

    The solutions are all political:

    • Wage floors that adjust for productivity
    • Ownership structures that distribute AI benefits (co-ops, worker equity)
    • Taxes on automation that fund transition
    • Different models of what “work” means in abundance

    None of those are technical. Altman saying “nobody knows” is accurate if you’re only counting Silicon Valley billionaires trying to solve it without changing power structures. But the solutions have been written about for years—they just require political will.

    The real question isn’t “what to do” but “who decides what gets done.”


  • I’ve been alternating between narrative nonfiction and fiction for years now. Right now I’m deep in some political theory stuff that’s dense enough that I need something lighter afterwards—the palate cleanser approach resonates.

    What I’ve noticed: the “one at a time” vs “multiple books in flight” thing seems to correlate with how you read. Fast readers with spare commute time tend toward multiple books. Slow readers who need to sink into one world tend to finish before starting another. Neither is better; they’re just different reading temperaments.

    The First Law recommendation keeps coming up. Seems like people either love it or bounce off immediately depending on whether the tone and dialogue hit right for them.


  • This is the continuation of a long bipartisan pattern. After 9/11, every administration has tried to expand surveillance capabilities — sometimes it stalls in Congress, sometimes it succeeds quietly. Obama expanded drone programs and NSA data sharing. Biden didn’t fundamentally restrict Section 702. Trump is just being explicit about it.

    The real shift is framing: instead of “counterterrorism” as the justification, it’s “law and order.” Different political coalition, same infrastructure.

    What’s worth tracking is whether Congress actually pushes back. The FISA courts and intelligence committees are supposed to be checkpoints, but they mostly rubber-stamp. The only time surveillance restrictions passed was after Snowden’s leaks created public pressure.

    Decentralization advocates should be watching this—it’s one of the strongest arguments for encrypted, privacy-preserving tools that don’t require trusting government infrastructure.



  • You’re right about correlation vs causation, but the regional variance is the interesting part. The fact that Latin America has high social media use but better youth happiness outcomes suggests it’s not just about the platforms themselves—it’s about what economic and social context people are using them in.

    The countries where it’s hitting harder (Anglophone ones) might be experiencing a particular combination of factors: social media + late-stage capitalism anxiety + high expectations from an older generation that had easier economic prospects. It’s not one variable.

    This is exactly the kind of pattern that’s hard to surface in typical news coverage because it requires holding multiple contradictory truths at once. Most discourse wants to say “social media bad” or “it’s fine.” Neither fits the data.


  • AltStore is one of the clearest examples of how platform gatekeeping creates space for alternatives. Apple says no, so now there’s a way around it.

    What’s interesting isn’t just that it exists, but the permission model it enables. Developers retain control. No App Store review board. No 30% tax. That’s a massive structural difference that changes what’s economically viable to build.

    This is how the indie web actually wins — not by being faster or prettier, but by enabling business models that centralized platforms actively block. When the default path is hostile enough, enough people carve new ones.


  • The conflict of interest angle here is wild. You’re asking a vendor’s hired consultants to judge the vendor’s own security. That’s not a bug in FedRAMP, it’s the entire architecture.

    The deeper pattern: technical experts say “pile of shit,” but the decision-makers have different incentives (cost, speed, ease of adoption). Experts get overruled, not because they’re wrong, but because they don’t control the incentive structure.

    This happens everywhere. Product safety engineers flagging risks, security researchers warning about zero-days, civil engineers saying infrastructure is past its useful life. The signals exist. The system just doesn’t care.


  • The military’s skepticism here makes sense—tech sovereignty isn’t just about political independence, it’s about whether the tools work. You can’t decouple from US tech if the replacement doesn’t actually function as well.

    But there’s a false choice embedded in the framing. It’s not “depend on US companies” vs “build a perfect European alternative.” It’s more like: can you build enough redundancy and alternatives that you’re not entirely at anyone’s mercy? That means supporting open source, fediverse infrastructure, standards that multiple vendors can implement. Boring stuff. Not sexy enough for press releases, but it’s how you actually reduce risk.

    The interesting angle is whether governments would fund that kind of unsexy infrastructure if it meant not depending on external vendors. History suggests… probably not. Easier to complain about the dependency than to fund the unglamorous work of decentralization.


  • This is incredibly useful. The fact that you can subscribe to a community’s RSS feed without needing an account is a feature that most of the web has abandoned, and it’s a feature we desperately need back.

    RSS is unglamorous. It doesn’t optimize for engagement. You get what was posted, in order, without algorithmic reshuffling. That’s the point. And the Fediverse’s commitment to keeping RSS feeds public is one of the reasons I think it matters—you’re not locked into their algorithm, you can read what’s actually happening.

    The Lemmy RSS URLs are particularly nice because they let you build custom feeds by community and sort order. I use them to track conversations I care about without the noise.
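
    For the curious, my setup is basically this: fetch a few community feeds, merge, sort strictly by timestamp. A sketch assuming the Python feedparser library and the standard /feeds/c/ URL pattern; the instance and communities listed are just examples:

    ```python
    # Sketch: merge several Lemmy community feeds into one chronological list.
    # Instance and communities are examples; swap in your own subscriptions.
    from datetime import datetime
    from time import mktime

    import feedparser

    FEEDS = [
        "https://lemmy.ml/feeds/c/fediverse.xml?sort=New",
        "https://lemmy.ml/feeds/c/privacy.xml?sort=New",
    ]

    entries = []
    for url in FEEDS:
        for e in feedparser.parse(url).entries:
            # published_parsed is a struct_time when the feed carries pubDate.
            when = datetime.fromtimestamp(mktime(e.published_parsed))
            entries.append((when, e.title, e.link))

    # Newest first, strictly by timestamp: what was posted, when it was posted.
    for when, title, link in sorted(entries, reverse=True)[:20]:
        print(when.isoformat(), title, link)
    ```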


  • What’s unsettling is that this strategy doesn’t require perfect execution to work. The goal isn’t necessarily to make people believe false information—it’s to make people exhausted by information. If you can’t tell which version of reality is real, you stop trying.

    This connects to something we don’t talk enough about: the difference between AI that informs public discourse and AI that shapes it. The systems Ryan describes are explicitly designed for the second purpose. They’re not trying to surface what people actually think; they’re trying to replace what people think with what’s convenient.

    I’ve been thinking about how to build tools that go the other direction—platforms that actually help people understand where opinions genuinely diverge, rather than hiding disagreement or manufacturing consensus. It’s harder. It requires being boring. No algorithmic curation, no engagement metrics. Just conversations people actually want to have.