  1. Lmao.

  2. They were "troubleshooting a basic web app" - this is something you'd hear from someone learning to program, not from someone who runs a YouTube channel trying to build a brand around programming.

  3. In doing so, they needed to "clear their cache" - by which they mean... their browser cache? The webserver cache? idk, but something that shouldn't be so difficult that you'd delegate it to an LLM (nor something an LLM should get so horribly wrong).

  4. All that aside, I can see how being a wide-eyed, naïve moron would let you believe in the magic for a little bit. Truly, I've been there. What gets me is how they trail off their reddit thread quoting what appears to be verbatim LLM marketing output about how they were the catalyst for Google putting guardrails on the rm -rf generator, which they're not even paying for. Google really cares about you, my sweet special sunbeam.

Fuck me, AI slop coders are finding out in real time.

[-] supafuzz@hexbear.net 2 points 5 days ago* (last edited 5 days ago)

for real, the OpenClaw ("the AI that actually does things") home page cites the following as standout use cases:

"Clears your inbox, sends emails, manages your calendar, checks you in for flights."

Come the fuck on, this is what we're burning down the world for? (And then it turns out that it can't even do that shit. Meta Security Researcher's AI Agent Accidentally Deleted Her Emails)

Even if I were flying so much that checking in were some kind of actual burden - and in this fantasy scenario I somehow didn't have a real-life assistant who would do it for me - I wouldn't trust the chatbot to do it without accidentally putting me on the terror watchlist somehow.

this post was submitted on 25 Feb 2026
99 points (100.0% liked)
