this post was submitted on 23 May 2025
199 points (92.0% liked)
Technology
AI is the best thing that's happened to us in ages: now we can do whatever we do without the pain and humiliation of spending enormous amounts of time digging through some shitty documentation or, in too many cases, straightforwardly brute-forcing the libs by guessing what the fuck parameters this or that function needs.
Now I can just ask an AI whether there is a method in this class that does something I need and receive a useful answer, not an RTFM like in the times you're so fond of.
Yes, as long as the information you get from the AI is correct. Which we know is absolutely not the case. That is the issue. If AI's output could be trusted 100%, things would be wildly different.
Unlike vibe coding, asking an LLM how to access some specific thing in a library when you're not even sure what to look for is a legitimate use case.
You're not wrong, but my personal experience is that it can also lead you down a pretty convincing but totally wrong path. I'm not a professional coder, but I have at least some experience, and I've tried the LLM approach for figuring out which library/command set/whatever I should use for the problem at hand. Sometimes it gives useful answers; sometimes it's totally wrong in a way that's easy to spot; and at worst it gives you something which (at least to me) seems like it could work. In that last case I then spend more or less time figuring out how to use the thing it proposed, fail, eventually read the actual old-fashioned documentation, and notice that the proposed solution is somewhat related to my problem but totally wrong.
At that point I would actually have saved time by doing things the old-fashioned way (which is getting more and more annoying as search engines get worse and worse). There are legitimate use cases too, of course, but you really need to have at least some idea of what you're doing to evaluate the answers LLMs give you.
Yeah, I guess that can happen. For me, it has saved much more time than it has wasted, but I've only used it on relatively popular libraries with stable APIs, and I don't ask for complex things.
Until it gives you a list of books and two thirds don't exist and the rest aren't even in the library.
The worst I've got so far hasn't been hallucinated "books", but stuff like functions from a previous major version of the API mixed in.
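That failure mode is easy to reproduce. As a sketch (the thread doesn't name a language, so Python is an assumption here): the old `collections.Mapping` aliases were deprecated for years and finally removed in Python 3.10, but they still show up in older tutorials an LLM was trained on.

```python
# Illustration: an answer based on pre-3.10 material might suggest
# collections.Mapping, which no longer exists on current Python.
import collections.abc

# Old suggestion -- raises AttributeError on Python 3.10+:
# isinstance({}, collections.Mapping)

# Current location of the same ABC:
print(isinstance({}, collections.abc.Mapping))  # True
```

The code is syntactically plausible either way, which is exactly why the mixed-in old API is harder to spot than a hallucinated book title.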
I'm most of the time on the opposite side of the AI arguments, but I don't think it's unreasonable to use an LLM as a documentation search engine. The article itself also points out Copilot's usefulness for similar things, but it seems that opinion lost the popular vote here.
I've had great success with using ChatGPT to diagnose and solve hardware issues. There are plenty of legitimate use cases. The problem remains that if you ask it for information about something, the only way to be sure it's correct is to already know what you're asking about. Anyone without at least passing knowledge of the subject will assume the info they get is correct, which will be the case most of the time, but not always. And in fields like security or medicine, such a small issue could easily have dire ramifications.
If you don't know what the code does, you're vibe coding. The point is to not waste time searching. Obviously you're supposed to check the docs yourself, but verifying an answer is much less tedious and time-consuming than finding it in the first place when the docs are hard to navigate.
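Verifying is also cheap to automate. A minimal sketch in Python (an assumption, since the thread names no language): before trusting an LLM-suggested method name, introspect the object and read its real docstring instead of the model's paraphrase.

```python
# Sanity-check an LLM-suggested method against the actual object.
# "removeprefix" is just an example name (a real str method since 3.9).
import inspect

suggested = "removeprefix"

if hasattr(str, suggested):
    # The method exists -- read the authoritative docstring.
    print(inspect.getdoc(getattr(str, suggested)))
else:
    print(f"str has no method {suggested!r}; the suggestion was wrong")
```

Two lines in a REPL either confirms the answer or catches the hallucination, which is the workflow being described: let the LLM do the finding, do the checking yourself.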
Right, it can totally do that safely and accurately despite not being able to count the Rs in strawberry.
So if library users stop communicating with each other and with the library authors, how are library authors going to know what to do next? Unless you want them to talk to AIs instead of people, too.
At some point, when we’ve disconnected every human from each other, will we wonder why? Or will we be content with the answer “efficiency”?
I'd say both are true. If I need a quick meal, I'm glad I can just order something ready-made, but I also enjoy cooking an intricate meal for hours. OP is maybe worried that people will forget about the latter and only prefer the ready-made solution.
I think chapter 2 does a good job presenting the advantages.
That was why it was so entertaining: getting a lil homebrew to run on the Nintendo DS was fun.