17 cringe-worthy Google AI answers demonstrate the problem with training on the entire web
(www.tomshardware.com)
It should not be used for programming:
https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/#:~:text=%22Our%20analysis%20shows%20that%2052%20percent%20of%20ChatGPT,of%20preferred%20ChatGPT%20answers%2C%2077%20percent%20were%20wrong.
It should not be used to replace programmers, but it can be very useful when used by programmers who know what they're doing ("do you see any flaws in this code?" / "what could be useful approaches to tackle X, given constraints A, B and C?"). At worst, it works as rubber-duck debugging that occasionally talks back with useful advice, or as a stand-in when no coworker is available.
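As a concrete illustration of that "pre-review pass" idea, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and the `ask_for_flaws` helper are all illustrative choices, not a prescribed setup; any chat-style LLM API would do.

```python
# Minimal sketch: ask an LLM to flag potential flaws before human review.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_for_flaws(code: str) -> str:
    """Ask the model to point out potential flaws in a code snippet.

    Treat the answer as a lead to investigate, not a verdict:
    "looks fine" is no signal, and any reported issue still needs
    to be verified by a human.
    """
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whatever you use
        messages=[
            {
                "role": "system",
                "content": "You are a careful code reviewer. List concrete "
                           "potential bugs, each with the line it concerns.",
            },
            {
                "role": "user",
                "content": f"Do you see any flaws in this code?\n\n{code}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = "def avg(xs):\n    return sum(xs) / len(xs)\n"
    # A good answer might flag the ZeroDivisionError on an empty list.
    print(ask_for_flaws(snippet))
```

The point of the wrapper's docstring is the whole debate below: the tool's output is a hint for the reviewer, never a sign-off.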
Let’s say the LLM says the code is error-free; how do you know it’s telling the truth? What happens when someone assumes it’s right and puts buggy code into production? That seems like a false sense of security to me.
The creative steps are where it’s good, but I wouldn’t trust it to confirm code was free of errors.
That's what I meant by saying you shouldn't use it to replace programmers, but to complement them. You should still have code reviews, but if the LLM can pick up issues before the code reaches that stage, it saves time for everyone involved.