17 cringe-worthy Google AI answers demonstrate the problem with training on the entire web
(www.tomshardware.com)
It should not be used for programming:
https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/#:~:text=%22Our%20analysis%20shows%20that%2052%20percent%20of%20ChatGPT,of%20preferred%20ChatGPT%20answers%2C%2077%20percent%20were%20wrong.
It should not be used to replace programmers. But it can be very useful when used by programmers who know what they're doing ("do you see any flaws in this code?" / "what could be useful approaches to tackle X, given constraints A, B and C?"). At worst, it can be used for rubber duck debugging that sometimes gives useful advice, or as a stand-in when no coworker is available.
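To illustrate, the "do you see any flaws in this code?" pattern is easy to script. A minimal sketch (not from the linked study), assuming the openai Python package with the v1+ client, an OPENAI_API_KEY in the environment, and a placeholder model name and code snippet:

```python
# Minimal sketch of asking an LLM to review a snippet for flaws.
# Assumptions: openai v1+ client installed, OPENAI_API_KEY set in the
# environment, "gpt-4" as a placeholder model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = """
def average(values):
    return sum(values) / len(values)  # what happens when values is empty?
"""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are reviewing code for a colleague."},
        {"role": "user", "content": f"Do you see any flaws in this code?\n{snippet}"},
    ],
)

# Treat the output as a hint to investigate, not a verdict.
print(response.choices[0].message.content)
```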
The article I posted references a study where ChatGPT was wrong 52% of the time and verbose 77% of the time.
It also found that ChatGPT's answers were believed to be correct more often than they actually were. And the study was explicitly about programming questions.
Yeah, I saw. But when I'm stuck on a programming issue, I have a couple of options:
Sure, LLMs may not be perfect, but not having them as an option is worse, and way slower.
In my experience, even when the code it generates is wrong, it will still send you in the right direction on the approach. And if it keeps spewing out nonsense, that's usually an indication that what you want is not possible.
I am completely convinced that people who say LLMs should not be used for coding either do not do much coding for work, or have not used an LLM when tackling a problem in an unfamiliar language or tech stack.
I haven't had need to do it.
I can ask people I work with who do know, or I can find the same thing ChatGPT provides in either language or project documentation, usually presented in a better format.
Let's say the LLM says the code is error-free; how do you know the LLM is being truthful? What happens when someone assumes it's right and puts buggy code into production? Seems like a possible false sense of security to me.
The creative steps are where it’s good, but I wouldn’t trust it to confirm code was free of errors.
That's what I meant by saying you shouldn't use it to replace programmers, but to complement them. You should still have code reviews, but if it can pick up issues before it gets to that stage, it will save time for all involved.
I'm not entirely sure why you think it shouldn't?
Just because it sucks at one-shotting programming problems doesn't mean it's not useful for programming.
Using AI tools as co-pilots to augment knowledge and break into areas of discipline that you're unfamiliar with is great.
Is it useful to lean on as if you were a junior developer? No, absolutely not. Is it a useful tool that can augment your knowledge and capabilities as a senior developer? Yes, very much so.
They answered this further down - they never tried it themselves.
I never said that.
I said I found the older methods to be better.
Any time I've used it, it either produced things verbatim from existing documentation examples which already didn't do what I needed, or it was completely wrong.
“Light” programming? ‘Find the errant period’ sort of thing?
It does not perform very well when asked to answer a Stack Overflow question. However, people ask questions differently in chat than on Stack Overflow. Continuing the conversation yields much better results than a zero-shot prompt (there's a sketch after this comment).
Also I have found ChatGPT 4 to be much much better than ChatGPT 3.5. To the point that I basically never use 3.5 any more.
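To make the multi-turn point concrete, here's a rough sketch of continuing the conversation instead of asking a fresh zero-shot question each time. Again this assumes the openai Python package with the v1+ client and a placeholder model name; the key bit is resending the earlier messages with each follow-up.

```python
# Rough sketch of multi-turn prompting: keep the running message history and
# send it back with every follow-up, rather than starting from zero each time.
# Assumptions: openai v1+ client, OPENAI_API_KEY set, "gpt-4" as a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # placeholder model name

messages = [
    {"role": "user", "content": "My Flask route returns 500 when the JSON body "
                                 "is missing a key. How should I handle that?"},
]

first = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Follow-up in the same conversation: the model now has the earlier context,
# so the answer can be narrowed instead of starting over.
messages.append({"role": "user", "content": "I'd rather return a 400 with a "
                                             "helpful error message. Can you show that?"})
second = client.chat.completions.create(model=MODEL, messages=messages)

print(second.choices[0].message.content)
```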