this post was submitted on 05 Apr 2026
23 points (92.6% liked)
TechTakes
Found an interesting take on YouTube, of all places. Her argument can be summarized (with high compression losses) as "AI companies and technologies are bad for basically all the reasons that non-cultist critics say, but trying to shame and argue people out of using them entirely is less effective than treating them as a normal tool with limitations and teaching people how to limit the harm." She makes the analogy to drug policy.
I think she makes a very compelling argument, and I'm still digesting it a bit because I definitely had the knee-jerk reaction of rejecting her as an insider shill. But especially towards the end, as she talks about how the AI industry targets low-literacy users as ideal customers (because the more you know about these tools, the less likely you are to actually use them), I found myself agreeing more than not. I do wish she had addressed the dangers of cognitive offloading more, since being mindful of which tasks you're letting the computer do for you is a pretty significant part of minimizing those harms, especially for students and some professionals who face a strong incentive to just coast by on slop if they can get away with it.
I just watched the whole thing. She makes a consistent case.
I felt a little called out by the bit about being tolerant. I sure haven't had great success talking to people close to me about their AI use. And I was maybe a little too cold to colleagues who, with good intentions, tried to get ahead of the AI literacy circus, even though I grudgingly agreed that they were right.
Maybe I don't meet enough randos to get a feel for how pervasive chatbots really are. Maybe it's a personality thing; I worked myself out of depression mostly by disciplining myself and refusing to buy my own excuses, and that's kind of how I approach every problem now. That sure isn't a vibe most people respond to.
There was one part of my AI beliefs that wasn't addressed. Beyond the "front-end" and "back-end" harms, which can be mitigated, the tech as a whole still seems like trash to me. That may be boomerism setting in, but chatbots just feel like they run counter to, and are displacing, my positive vision for a social fabric, whether for responsible professional communities or for interpersonal connections.
(I do buy into the use case for a context-sensitive search engine, e.g. for walls of legalese. But the current framing of the tools is so harmful that even that use is hazardous, as seen in the anecdote.)
I don't meet that many people either, but I get the general vibe that people understand that it's somewhat shitty, but it still fills a social need (compare/contrast horoscopes).
Completely anecdotally, I recently saw a short video of a French woman saying to an impressively knowledgeable TV quiz champion [intended as a compliment, I think]: "Wow, you sound like ChatGPT!"
To me that was very illustrative of how ChatGPT is perceived from a less tech-literate perspective.