this post was submitted on 23 May 2025
199 points (92.3% liked)
Technology
you are viewing a single comment's thread
Unlike vibe coding, asking an LLM how to access some specific thing in a library when you're not even sure what to look for is a legitimate use case.
You're not wrong, but my personal experience is that it can also lead you in a pretty convincing but totally wrong direction. I'm not a professional coder, but I have at least some experience, and I've tried the LLM approach for figuring out which library/command set/whatever I should use for the problem at hand. Sometimes it gives useful answers, sometimes it's totally wrong in a way that's easy to spot, and at worst it gives you something which (at least to me) seems like it could work. In that last case I then spend a fair amount of time figuring out how to use the thing it proposed, fail, eventually read the actual old-fashioned documentation, and notice that the proposed solution is somewhat related to my problem but totally wrong.
In that case I would actually have saved time by doing things the old-fashioned way (which is getting more and more annoying as search engines get worse and worse). There are legitimate use cases too, of course, but you really need to have at least some idea of what you're doing to evaluate the answers LLMs give you.
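For what it's worth, one low-effort way to do that evaluation before sinking time into a suggestion is to check whether the thing the LLM named even exists in the version you have installed, and what its real signature is. A rough Python sketch (the json.dumps call at the end is just a placeholder for whatever the model suggested):

```python
import importlib
import inspect

def check_suggestion(module_name: str, attr_name: str) -> None:
    """Sanity-check an LLM-suggested function before building on it."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        print(f"Module {module_name!r} is not installed or does not exist.")
        return

    attr = getattr(module, attr_name, None)
    if attr is None:
        print(f"{module_name}.{attr_name} does not exist; possibly hallucinated or from another version.")
        return

    # The real signature and docstring, straight from the installed version:
    print(inspect.signature(attr))
    print(inspect.getdoc(attr))

# e.g. checking a suggestion to "use json.dumps with sort_keys=True":
check_suggestion("json", "dumps")
```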
Yeah, I guess that can happen. For me it has saved much more time than it has wasted, but I've only used it on relatively popular libraries with stable APIs, and I don't ask for complex things.
Until it gives you a list of books and two thirds don't exist and the rest aren't even in the library.
The worst I've got so far hasn't been hallucinated "books", but stuff like functions from a previous major version of the API mixed in.
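To make that failure mode concrete (my example, not the commenter's): pandas removed DataFrame.append in 2.0, but models trained mostly on pre-2.0 code still suggest it.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
new_row = pd.DataFrame({"a": [3]})

# The pattern older tutorials (and LLMs trained on them) tend to suggest;
# on pandas >= 2.0 it raises AttributeError because DataFrame.append was removed:
# df = df.append(new_row, ignore_index=True)

# The current API for the same thing:
df = pd.concat([df, new_row], ignore_index=True)
print(df)
```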
I'm most of the time on the opposite side of the AI arguments, but I don't think it's unreasonable to use an LLM as a documentation search engine. The article itself also points out Copilot's usefulness for similar things, but it seems that opinion lost the popular vote here.
I've had great success with using ChatGPT to diagnose and solve hardware issues. There are plenty of legitimate use cases. The problem remains that if you ask it for information about something, the only way to be sure it's correct is to actually know what you're asking about. Anyone without at least passing knowledge of the subject will assume the info they get is correct, which will be the case most of the time, but not always. And in fields like security or medicine, such a small issue could easily have dire ramifications.
If you don't know what the code does, you're vibe coding. The point is not to waste time searching. Obviously you're still supposed to check the docs yourself, but verifying a pointer is much less tedious and time-consuming than finding it in the first place, especially if the docs are hard to navigate.
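As a small illustration of that workflow (my sketch, not the commenter's): once the LLM has handed you a plausible name, reading the real documentation for it is a one-liner rather than a search session.

```python
# Pull up the actual docs for the installed version instead of trusting the
# model's paraphrase of them (pathlib.Path.glob is just an example target):
help("pathlib.Path.glob")
```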