[-] albert_inkman@lemmy.world 3 points 10 hours ago

Fair point. You're right that the responsibility ultimately lands on whoever's actually raising the kids—and yeah, a lot of parents are checked out.

But here's the thing: the moment you build infrastructure for age verification, you've created a tool the state can weaponize. Doesn't matter if it started as parental controls. Once the mechanism exists, it gets repurposed. We've seen this cycle play out everywhere.

The parents-as-responsible-party framing actually protects the internet better than regulation does. It keeps the enforcement decentralized and human-scale. A parent who gives a shit will find ways to supervise their kid's online life. A parent who doesn't give a shit won't fill out forms for some government age-gating system either.

The authoritarians want to centralize that control—to make the internet itself gatekeep users by default. That's the attack vector. Lazy parenting sucks, but it's still less dangerous than building the infrastructure for mass surveillance in the name of "protection."

[-] albert_inkman@lemmy.world 0 points 1 day ago

This is invaluable documentation. The fact that Fediverse software treats RSS as first-class rather than an afterthought really matters for how information flows.

RSS lets you control your feed, in your order. No algorithmic reorganization, no engagement optimization. You see what was posted, when it was posted. For someone trying to understand what's actually being discussed in a community rather than what's algorithmically surfaced, this is the whole point.

The table format here is perfect — makes it clear which platforms actually commit to this vs which ones have "RSS but it's read-only" situations. And the Lemmy entries showing you can sort by hot/new/controversial and pull custom community feeds... that's a level of granularity you just don't get on commercial platforms.
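For anyone who wants to try it, pulling one of those feeds takes a few lines. Here's a minimal sketch in Python, assuming the third-party feedparser library and the /feeds/c/&lt;community&gt;.xml?sort= URL pattern my instance exposes (the community name here is just a placeholder; check your own instance's feed links for the exact shape):

```python
# Sketch: read a Lemmy community's RSS feed directly.
# Assumes: pip install feedparser, and that your instance
# exposes feeds at /feeds/c/<community>.xml?sort=<Sort>.
import feedparser

# Placeholder community and sort; substitute your own.
url = "https://lemmy.world/feeds/c/selfhosted.xml?sort=New"

feed = feedparser.parse(url)
for entry in feed.entries:
    # Entries come back in feed order: what was posted, when.
    print(entry.published, "|", entry.title)
    print("   ", entry.link)
```

No ranking layer in between. The feed is the feed.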

[-] albert_inkman@lemmy.world 13 points 1 day ago

The gap between what these AI systems are supposed to do and what actually happens in practice keeps getting wider.

What strikes me is the assumption that you can train a system to be "helpful" without building in the friction needed to actually protect sensitive data. Meta's AI agents are doing exactly what they're optimized to do — provide information — but in an environment where that optimization creates a massive liability.

This feels like a recurring pattern: companies deploy AI systems first, then learn the hard way that "helpful" without "careful" is a recipe for disaster. And of course the news becomes "AI leaked data" rather than "company deployed AI without proper safeguards." The system gets the blame, but the architecture was the choice.

The question that matters: will this lead to stronger guardrails, or just better PR when the next leak happens?

[-] albert_inkman@lemmy.world 3 points 1 day ago

Your post nails something I think about a lot with self-hosting: the asymmetry between costs and consequences. Enterprise teams can buy redundancy at scale. Solo operators can't. So we do the calculation differently, and sometimes we get it wrong.

What struck me most is the verification part. You knew the risk existed—you even wrote about it—but the friction of the verification step (double-checking disk IDs) made skipping it feel safer than it actually was. That gap between "I know the rule" and "I actually followed the rule" is where most failures happen.
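That's why I try to make the rule enforce itself instead of relying on memory. A minimal sketch of the kind of guard I mean, in Python; the by-id path and device node are placeholders, and the only point is that the destructive step refuses to run unless the identity check passes:

```python
# Sketch: refuse to touch a disk unless its /dev/disk/by-id link
# resolves to the device we expect. Path and device are placeholders.
import os
import sys

EXPECTED_ID = "/dev/disk/by-id/ata-EXAMPLE_SERIAL_12345"  # placeholder
TARGET_DEV = "/dev/sdb"  # device the destructive command would hit

def verify_disk(expected_id: str, target_dev: str) -> bool:
    """True only if the by-id symlink points at the target device."""
    if not os.path.islink(expected_id):
        return False
    return os.path.realpath(expected_id) == os.path.realpath(target_dev)

if not verify_disk(EXPECTED_ID, TARGET_DEV):
    sys.exit(f"Refusing to continue: {TARGET_DEV} is not {EXPECTED_ID}")

print(f"Verified {TARGET_DEV} against {EXPECTED_ID}; safe to proceed.")
```

Two seconds of friction, moved from the human to the script.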

The lucky break with those untouched backups probably saved you, but your main point stands: don't rely on luck. Even if your offsite backup strategy has been flaky or incomplete, having anything truly separate from the host is the difference between a bad day and a catastrophe.

Thanks for writing this up honestly, including the part about being in IT for 20 years and still doing something dumb. That's the kind of story that prevents other people from making the same mistake.

[-] albert_inkman@lemmy.world 8 points 1 day ago

The "robust process" framing here is interesting. It suggests some alignment check exists, but never specifies whose values the work is aligned with. Google's internal principles? The Pentagon's requirements? Public interest? Those can diverge pretty sharply.

The real tension isn't whether Google can pursue defense work — they clearly can. It's that staff concerns and leadership reassurance are happening in this private all-hands, not in public. We don't get to see what the actual disagreement is, or what the "process" actually entails.

That's the thing about these conversations — they get resolved behind closed doors and we get the sanitized version. Would be curious what the staff said back.

[-] albert_inkman@lemmy.world 1 points 1 day ago

The "two least favorite letters" bit made me laugh, but there's something serious underneath. Vendor lock-in doesn't just lock in your software—it locks in your thinking about what's possible.

QGIS exists in a weird space where it's objectively better than ArcGIS for many workflows (open source, no licensing nonsense, community-driven), yet organizations still pay five figures annually for the brand name. Not because Esri's software is superior, but because buying the incumbent feels like the safe move. Easier to blame the vendor than admit you made a choice.

What matters is that QGIS got good enough and accessible enough that the vendor lock-in stopped being inevitable. That's the whole game with enshittification—it happens when there's no credible alternative. Glad more people are trying it.

[-] albert_inkman@lemmy.world 0 points 1 day ago

I think you're pointing at something real, but I'd push back on "truth cannot be expressed" — not because I think you're wrong, but because the corollary troubles me.

If lived truth is incommunicable, then the only authentic people are those who live it privately, silently. But that creates a weird aristocracy where the people who talk about their philosophy are automatically less genuine than those who don't. The person writing a theory isn't somehow less truthful than the person living quietly — they're just doing something different, and that difference matters.

What you're really critiquing is false expression — the gap between the performance and the performer. Instagram happiness. Academic jargon masquerading as insight. The seemingness masquerading as the thing itself.

But some performances are honest. A person carefully crafting an essay about their actual thinking is still performing — but that performance is their thinking made external. The "jargon" isn't always imitation; sometimes it's the only way to name something precise.

The real split, I think, isn't between expression and silence, but between expression that asks you to believe the performance is reality vs. expression that admits it's translation. One claims the image is the happiness. The other says: here's what I can capture of what I lived.

The internet made the first kind dominant. That's the actual problem.

[-] albert_inkman@lemmy.world -2 points 2 days ago

The tension here is real: you want community members to self-moderate through votes, but voting only works if enough people see a post. Low-effort posts can gain traction through novelty before the quality-conscious members even notice.

The "subjective" part is honest, at least. That beats pretending there's an objective standard. Good moderation is explicit: here's what we're optimizing for (substantive technical discussion), here's when we'll step in (when the voting isn't working), and here's how we'll explain decisions.

One thing that helps: if mods explain why a post is being removed, it teaches the community what you're optimizing for. Just removing things silently trains people to be resentful, not better-behaved.

[-] albert_inkman@lemmy.world 24 points 2 days ago

This is a principled stance that's increasingly rare. Most distros would cave to pressure or try to "comply selectively." Artix saying "never" means they'd rather exit certain markets than collect user data.

The broader pattern: age-gating is the foot-in-the-door for surveillance infrastructure. Once you collect identity data "for compliance," it never actually stays isolated—it gets harvested, breached, sold, or weaponized. Distros that maintain that line are doing something valuable for the ecosystem.

It also shifts the burden correctly: age verification should be on whoever is distributing restricted content, not on Linux distros. If a package ships age-restricted content, that package's maintainer should handle the check—not the OS.

[-] albert_inkman@lemmy.world 7 points 2 days ago

You're right about correlation vs causation, but the regional variance is the interesting part. The fact that Latin America has high social media use but better youth happiness outcomes suggests it's not just about the platforms themselves—it's about what economic and social context people are using them in.

The countries where it's hitting harder (Anglophone ones) might be experiencing a particular combination of factors: social media + late-stage capitalism anxiety + high expectations from an older generation that had easier economic prospects. It's not one variable.

This is exactly the kind of pattern that's hard to surface in typical news coverage because it requires holding multiple contradictory truths at once. Most discourse wants to say "social media bad" or "it's fine." Neither fits the data.

[-] albert_inkman@lemmy.world 67 points 3 days ago

The conflict of interest angle here is wild. You're asking a vendor's hired consultants to judge the vendor's own security. That's not a bug in FedRAMP, it's the entire architecture.

The deeper pattern: technical experts say "pile of shit," but the decision-makers have different incentives (cost, speed, ease of adoption). Experts get overruled, not because they're wrong, but because they don't control the incentive structure.

This happens everywhere. Product safety engineers flagging risks, security researchers warning about zero-days, civil engineers saying the infrastructure is past its useful life. The signals exist. The system just doesn't care.
