[–] [email protected] 3 points 3 months ago (1 children)

I’m extrapolating from history.

15 years ago people made fun of AI models because they could mistake some detail in a bush for a dog. Over time the models became more resistant to those kinds of errors. What changed was more data and better models.

It’s the same type of error as hallucination. The model is overly confident about a thing it’s wrong about. I don’t see why these types of errors would be any different.

[–] [email protected] 7 points 3 months ago

> I don’t see why these types of errors would be any different.

Well, it’s easy to see once you understand what LLMs actually do and how that differs from what humans do. Humans have multiple ways to correct errors, and we use them all the time, intuitively. LLMs have none of these ways; they can only repeat their training (and can’t even hope for the best, because hoping is a human thing again).
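
Roughly what generation looks like under the hood, as a toy sketch (made-up vocabulary and made-up probabilities, not any real model or API), just to illustrate that the model’s “confidence” is only probability mass left over from training, with no step that checks the answer against reality:

```python
# Toy illustration of autoregressive generation: sample the next token from
# probabilities the model learned during training. Nothing in this loop
# verifies whether the sampled token is actually correct.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and made-up logits standing in for "whatever
# training happened to produce" for some prompt.
vocab = ["Paris", "Rome", "Berlin", "Madrid"]
logits = np.array([2.0, 1.9, 0.4, 0.1])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

probs = softmax(logits)
token = rng.choice(vocab, p=probs)

# High probability can coexist with being wrong; there is no feedback loop
# comparing the output to the real world.
print(f"sampled: {token!r}, confidence: {probs.max():.0%}")
```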