submitted 2 days ago by alyaza@beehaw.org to c/technology@beehaw.org
[-] luciole@beehaw.org 71 points 2 days ago

It's hard having two decades of experience in a domain I suddenly find myself at odds with. Reading about others having the same qualms reassures me that I'm not going crazy. On the other hand, I feel drawn further into an untenable, contradictory position.

Once in a while I give in. It's typically when I'm faced with a non-trivial problem I realize will take me days of learning before I have any chance of tackling it. My colleagues start suggesting it, or share some slop to "help out". So I think: fuck it, I'll study later; for now AI will solve it, I need this ticket closed ASAP. I fire up a "decent" paid model and start feeding it context. Every time it's a nightmare. Hours of trying stuff that doesn't stick, of questioning, of arguing with a chatbot, of wading through "here are the facts" and "good catch" and "I owe you an apology". It's not a shortcut, it's a fucking dead end. Then the bitter aftertaste can only be cleansed with cold, hard, time-consuming actual learning.

[-] Furbag@pawb.social 6 points 1 day ago

I am so glad to hear that I am not the only one who finds AI coding to be an almost futile exercise. I spend more time talking to the damn robot trying to get it to fix problems than I would if I had just done it more slowly and deliberately in the programming language I am familiar with, or just circumvented the automation effort and done the task manually. All three seem to take about the same amount of time.

[-] resipsaloquitur@lemmy.cafe 20 points 2 days ago

At least after hours of arguing with a bot and burning tons of money and energy you have a pile of code you can’t understand without paying a chatbot.

[-] luciole@beehaw.org 16 points 2 days ago

But will the chatbot understand itself? It's fun when you start questioning the LLM line by line about its own slop in the same session and it starts flagging all sorts of things it did wrong. Why didn't it write it correctly in the first place? Or is the fix wrong? Who knows? People, I guess. The model is fed on knowledge, but whether that knowledge will activate in response to your prompt and come back unadulterated is a coin toss.

[-] resipsaloquitur@lemmy.cafe 18 points 2 days ago* (last edited 2 days ago)

No, but it will gladly pretend to understand it. For a price.

[-] Kichae@lemmy.ca 8 points 1 day ago

That's a problem, but the bigger issue is how the commercial models are tuned to tell you that you are never wrong.

Or, more to the point, to tell people who don't know what they're talking about that they're never wrong.

[-] resipsaloquitur@lemmy.cafe 7 points 1 day ago

We see the appeal to middle and upper management.

this post was submitted on 03 May 2026
118 points (96.8% liked)
