this post was submitted on 05 Apr 2026
23 points (92.6% liked)
TechTakes
Found an interesting take on YouTube, of all places. Her argument can be summarized (with high compression losses) as: "AI companies and technologies are bad for basically all the reasons that non-cultist critics say, but trying to shame and argue people out of using them entirely is less effective than treating them as a normal tool with limitations and teaching people how to limit the harm." She draws an analogy to drug policy.
I think she makes a very compelling argument, and I'm still digesting it a bit because I definitely had the knee-jerk urge to dismiss it as insider shilling. But especially towards the end, where she talks about how the AI industry targets low-literacy users as ideal customers (because the more you know about the technology, the less likely you are to actually use it), I found myself agreeing more than not. I do wish she had addressed the dangers of cognitive offloading more, since being mindful of which tasks you're letting the computer do for you is a pretty significant part of minimizing those harms, especially for students and some professionals who face a strong incentive to just coast by on slop if they can get away with it.
Sounds kind of like the Baldur Bjarnason strategy but for your coworkers instead of your boss.
I can see the value of someone with a critical understanding diving into the technology, so they can talk others down from the ledge.
But you also need the social pressure to maintain some slop-free spaces. Not everyone can be asked to accommodate recovering slopaholics.