this post was submitted on 07 Oct 2024
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Wouldn't removing the data's effect on the model require a full retraining? A bit too late for all the open source ones already out there.
That's a good question, because there is nuance here! It's interesting because I ran into this same issue while working on similar projects. First off, it's important to understand what your obligation actually is and how data deletion can reasonably be understood. No one believes it is necessary to permanently remove all copies of anything, any more than it is necessary to prevent all forms of plagiarism. No one is complaining that it is possible to plagiarise at all; we're complaining that major institutions continue to do so with ongoing disregard for the law.
Only maximalists fall into the trap of thinking about the world in a binary sense: either all in or nothing at all.
For most of us, it's about economics and risk profiles. Open source models get trained continuously over time; there won't be just one version. Requiring open source operators to curate future training in good faith has a long-tail impact on how a model evolves. Previously ingested PII or plagiarized data might still exist, but its value, novelty, and relevance to economic life drop sharply over time. No artist or writer argues that copyright protections need to exist forever. They literally just need survivable working conditions and respect for attribution. The same goes for PII: no one claims they must be completely anonymous. They just want cyber crime to be taken seriously rather than abandoned in favor of one party taking the spoils of their personhood.
Also, yes, there are algorithms that can control how further learning promotes or demotes weights and connections according to various policies. The point isn't that any one policy is perfect; it's whether there is a willingness to adopt policies in good faith at all. (Most current LLM filters are intentionally weak, so that those with $$ paying for API access can outright ignore them, while the vendors turn around and claim it can't be solved, too bad so sad.)
Yes. It is possible to perturb and influence the evolution of a continuously trained neural network according to an external policy, and they're carefully lying by omission when they say they can't 100% control it or 100% remove things. Fine. That's not necessary, neither in copyright law nor in privacy law. It never has been.
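To make that concrete, here's a minimal toy sketch of one such policy (my own illustration, not any vendor's actual method, and all names and constants are made up): continued training that descends on retained data while ascending on a "forget" set, demoting the forgotten association until the model stops reproducing it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the mean logistic loss at weights w on batch (X, y).
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Retained data: the label depends only on the first feature.
Xk = rng.normal(size=(180, 2))
yk = (Xk[:, 0] > 0).astype(float)
# "Forget" set (stand-in for PII): a distinct cluster whose association
# (high second feature -> label 1) is learnable only from these records.
Xf = rng.normal(scale=0.3, size=(20, 2)) + np.array([0.0, 3.0])
yf = np.ones(20)

# Phase 1: ordinary training on everything.
Xall, yall = np.vstack([Xk, Xf]), np.concatenate([yk, yf])
w = np.zeros(2)
for _ in range(300):
    w -= 0.5 * grad(w, Xall, yall)
before = loss(w, Xf, yf)

# Phase 2: continued training under a deletion policy — descend on the
# retained data, and ascend on the forget set for as long as the model
# still reproduces the forgotten association.
for _ in range(1000):
    w -= 0.5 * grad(w, Xk, yk)
    if sigmoid(Xf @ w).mean() > 0.5:
        w += 0.25 * grad(w, Xf, yf)

after = loss(w, Xf, yf)
keep_acc = np.mean((sigmoid(Xk @ w) > 0.5) == (yk > 0.5))
```

The forget-set loss rises while accuracy on the retained data stays high: the model's behaviour is steered away from the deleted records rather than surgically excised, which is exactly the point above — 100% removal is neither possible nor required.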
Are you sure that meets the letter of the law? GDPR would say "fuck that version of nuance, fix it." Microsoft is now trying filtering on Bing Copilot in Germany, with variable results. What does the relevant California law say and mean?
I am not a lawyer. But you wouldn't be surprised to hear my position: maximalism and strict binary assumptions won't work on either end and don't satisfy what anyone truly wants or needs. If we're not careful about what it takes to move the needle, we agree with them by saying "it can't be done, so it won't be done."
What's truly lovely about the GDPR is that it is maximalist, strict, and binary. To every corporate "but..." the GDPR's answer is "fucks given: 0, this is YOUR problem, comply or perish."
Which makes it so baffling every time a techbro fails to understand it or claims "GDPR doesn't apply to me." Just don't fuck around with PII and don't collect any without explicit permission from the user! How is this difficult?!
I'm referring specifically to this where they could only put in a shaky bodge.
When you don't know an example, consider looking it up and not just waffling anyway.