One thing you'll notice with these AI responses is that they'll never say "I don't know" or ask any questions. If they don't know, they'll just make something up.
You clearly haven't experimented with AI much. If you ask most models a question that doesn't have an answer, they will respond that they don't know the answer, before giving very reasonable hypotheses. This has been the case for well over a year.
You clearly haven't experimented with AI much in a work environment. When you ask it to do specific things that you're not sure are possible, it will 100% ignore part of your input and give you a positive response at first.
"How can I automate outlook 2020 to do X?"
'You do XYZ'
me, after looking it up"that's only possible in older versions"
'You are totally right, you do IJK'
"that doesn't achieve what i asked"
'Correct, you can't do it.'
And don't get me started on the APIs of actual frameworks... I've wanted to punch it hard when dealing with React or Spark. Luckily I usually know my stuff and only use it to find a quick example of something, which I test locally before implementing, and only when 5 minutes of googling didn't give me the baseline. But the number of colleagues who not only blindly copy code but argue against my reasoning with "ChatGPT says so" is fucking crazy.
When ChatGPT says something I know is incorrect, I ask for sources and there are fucking none. Because it's not possible, my dude.
And this is the best case scenario. Most of the time it will be:
Useless shit you can't trust.
I'd prefer if I didn't have to iterate twice...