[–] [email protected] 54 points 3 months ago (3 children)

And the system doesn't know either.

For me this is the major issue. A human is capable of saying "I don't know". LLMs don't seem able to.

[–] [email protected] 35 points 3 months ago (2 children)

Accurate.

No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There's no concept of not knowing the answer, because they don't know anything in the first place.

[–] [email protected] 18 points 3 months ago (1 children)

The worst for me was a fairly simple programming question. The class it used didn't exist.

"You are correct, that class was removed in OLD version. Try this updated code instead."

Gave another made-up class name.

Repeated with a newer version number.

It knows what answers smell like, and the same goes for excuses. Unfortunately, there's no way of knowing whether it's actually bullshit until you take a whiff of it yourself.
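
You can at least automate part of the whiff. A minimal sketch in Python: before trusting a class name an LLM hands you, check that it actually exists in the installed package. The `class_exists` helper is hypothetical, written just for this comment, and the `requests.Session` / `requests.AsyncSession` pair is only an illustration (assuming `requests` is installed; `Session` is real, `AsyncSession` is the kind of plausible-smelling name that gets invented):

```python
# Sanity check: does the class an LLM suggested actually exist?
# class_exists is a hypothetical helper, not a library function.
import importlib

def class_exists(module_name: str, class_name: str) -> bool:
    """Return True if class_name is really defined in module_name."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        # The module itself isn't installed (or was hallucinated too).
        return False
    return hasattr(module, class_name)

# Illustrative only: Session is a real class in requests,
# AsyncSession is not.
print(class_exists("requests", "Session"))       # True
print(class_exists("requests", "AsyncSession"))  # False
```

It won't tell you the code is right, only that the names aren't fabricated, but that alone would have caught every round of the loop above.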

[–] [email protected] 5 points 3 months ago

So instead of Prompt Engineer, the more accurate term would be AI Taste Tester?

From what I’ve seen, you’ll need an iron stomach.