this post was submitted on 06 Aug 2023
122 points (88.1% liked)

Asklemmy

There are a lot of GOP-controlled legislatures in the USA pushing through so-called “child protection” laws, but they come at a cost to people’s rights and data privacy. Most of these bills involve requiring adults to upload a copy of their photo ID.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

The auth system knows you verified for something. The only way to actually preserve privacy is total anonymity from everyone.

[–] [email protected] 1 points 1 year ago (2 children)

Nope, it doesn't. Did you read what I wrote or did you just have a knee-jerk reaction?

[–] [email protected] 4 points 1 year ago (1 children)

Please explain it to me like I'm five. How can the authentication service not know what you're authenticating against? How can it provide you a token that you can't use over and over again, or pass on to other people?

OAuth specifically wants to know what you're using your tokens for.

In principle, if you insert a middleman into a transaction, the middleman knows about the transaction. Thus it violates privacy.
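
To make the debate concrete, here is a minimal sketch (Python, using the `cryptography` package, with a hypothetical issuer and relying site) of the kind of token flow in question: the issuer signs a short-lived, single-use claim that never names the site it will be shown to, so it cannot tell where the token gets redeemed. Key distribution, revocation, and the hand-it-to-a-friend problem raised above are deliberately out of scope.

```python
import base64
import json
import secrets
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Hypothetical issuer (the party that checks your photo ID) --------------
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

def issue_token() -> bytes:
    """Called only after the issuer has verified the user's ID out of band."""
    claim = {
        "over_18": True,
        "nonce": secrets.token_hex(16),  # makes the token single-use
        "exp": int(time.time()) + 300,   # and short-lived (5 minutes)
    }
    body = json.dumps(claim, sort_keys=True).encode()
    sig = base64.b64encode(issuer_key.sign(body))
    return body + b"." + sig             # note: no mention of the relying site

# --- Relying site (the age-gated service) ------------------------------------
seen_nonces: set[str] = set()            # replay protection on the site's side

def redeem_token(token: bytes) -> bool:
    body, sig = token.rsplit(b".", 1)
    try:
        issuer_pub.verify(base64.b64decode(sig), body)
    except InvalidSignature:
        return False                     # not signed by the issuer
    claim = json.loads(body)
    if claim["exp"] < time.time() or claim["nonce"] in seen_nonces:
        return False                     # expired or already redeemed
    seen_nonces.add(claim["nonce"])
    return bool(claim.get("over_18"))

token = issue_token()
print(redeem_token(token))  # True
print(redeem_token(token))  # False: a second use of the same token is rejected
```

Under these assumptions the issuer never learns which site saw the token, and the site never learns who the holder is; whether the issuer can be trusted not to log issuance events is the capability question argued below.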

[–] [email protected] 1 points 1 year ago (1 children)

What good is it for the system to know, if the system disregards that information right after auth? Effectively it's like no one ever knew.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

You're confusing intents and capabilities. When we're talking about security and privacy we have to talk about capabilities. Not intents.

Somebody could have the best intentions, but you don't want to give them the capability to hurt you if it's not necessary. Does a daycare need a volunteer militia hanging out all day cleaning their weapons? Probably not; the capability, even if well intended, is antithetical to the security and welfare of the children.

Even if the intention is good today, putting the framework and capability in just invites future corruption.

[–] [email protected] 1 points 1 year ago (1 children)

Hence why such a system would need to be open source and publicly audited.

[–] [email protected] 1 points 1 year ago

If the system exists, it will be abused. Therefore the government should not create the system to start with.

[–] [email protected] 2 points 1 year ago (1 children)

It is a basic tautological fact that you cannot verify an identity while keeping that identity private from the verifier.

[–] [email protected] 1 points 1 year ago

Then you don't know much about IT. Sure, the verifier must know your identity at the point of identification. Doesn't mean it has to store any information about what you did. Unless of course you're worried that the PC itself will magically come to life and do something with the information. In that case you need an entirely different kind of help. Source for my claims: Designing system architecture is literally my job.
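
As a minimal sketch of that "check at the door, store nothing" claim, assuming a hypothetical verifier function and made-up ID fields: the handler derives a yes/no answer and returns it, with no database write and no log line. Nothing in the code can enforce that absence, which is the capability argument made upthread.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IdDocument:
    """Hypothetical ID fields; they exist only inside this check."""
    name: str
    date_of_birth: date

def verify_and_attest(doc: IdDocument, today: date | None = None) -> bool:
    """Answer whether the holder is 18+, retaining nothing about who they are."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (doc.date_of_birth.month,
                                                doc.date_of_birth.day)
    age = today.year - doc.date_of_birth.year - (0 if had_birthday else 1)
    return age >= 18
    # Deliberately absent: no audit table, no analytics event, no record of
    # which site asked. Nothing technical prevents adding those later, though.

print(verify_and_attest(IdDocument("Jane Doe", date(2000, 1, 1))))  # True
print(verify_and_attest(IdDocument("Kid", date(2015, 6, 1))))       # False
```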