this post was submitted on 31 Jul 2024
13 points (93.3% liked)

General Programming Discussion


I initially wrote this as a response to this joke post, but I think it deserves a separate post.

As a software engineer, I am deeply familiar with the concept of rubber duck debugging. It's fascinating how "just" (re-)phrasing a problem can open up a path to a solution or shed light on my own misconceptions or confusions. (As an aside, I find that writing commit messages has a similar effect, as does re-reading my own code under a different "lighting": for instance, after I finish a branch and push it to GitLab, I will sometimes immediately review the code (or just the diff) in GitLab, as opposed to my terminal or editor, and notice new things.)

But another thing I've been realizing for some time is that these "a-ha" moments always come with mixed feelings. Sure, it's great that I've found the solution, but it also feels like a bit of a downer. I suspect that while crafting the question, I've subconsciously also been looking forward to the social interaction that would come from asking it: suddenly belonging to a group of engineers having a crack at the problem.

The thing is: I don't get that with ChatGPT. I don't get it because there was never going to be any social interaction to begin with.

With ChatGPT, I can do the rubber duck debugging thing without the sad part.

If no rubber duck debugging happens and ChatGPT answers my question, then the case is obvious: I can just move on.

If no rubber duck debugging happens and ChatGPT fails to answer my question, then at least by that point I've gained some clarity about the problem, which I can reuse to phrase my question to an actual community of peers, be it an IRC channel, a Discord server, or our team's Slack channel.


So I'm wondering: do other people tend to use LLMs as this sort of interactive rubber duck?

And, as a bit of a stretch of this idea: could an LLM be thought of as a tool for practicing asking questions, prior to actually asking real people?


PS: I should mention that I'm not a native English speaker (which I guess is probably obvious by now from my writing), so part of my "learning to ask questions" is also learning to do it specifically in English.

[–] [email protected] 1 points 3 months ago

Technically it's a program, so it can't move, but what I think it means, to give it the benefit of the doubt, is an object that can't talk to you.