[email protected] 1 points 2 months ago

Yeah, all training ends up being pattern learning in some form or fashion. But the acceptable patterns end up matching logic. For example, if you ask ChatGPT a question, it uses its learned patterns to produce its estimate of the correct output. The pattern it has learned encompasses, or at least matches, logical processing of the user's input into the kind of output it's been trained to see as acceptable. So with enough training, it should and does go from simple memorization of individual examples to learning broad rules, like logic (or a pattern that matches logical rules and an "understanding of language"), so that it can give acceptable responses to situations it hasn't seen in training.

And because it works by pattern learning and prediction, it often "hallucinates" information like citations: when you ask it for sources, it creates a novel citation matching the pattern it's seen instead of the exact citation you want, in a situation where you actually want memorized information.
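To make that concrete, here's a minimal sketch: a toy bigram Markov model standing in for a real LLM, trained on a few invented citation-like strings. Everything here (the corpus, the names, the titles) is made up for illustration; a real model learns from vastly richer context than adjacent-token counts. But sampling from even this toy pattern can splice fragments of different training lines into a citation that fits the learned pattern while existing nowhere in the training data, which is the same failure mode as a hallucinated citation.

```python
import random

# Toy corpus of citation-like strings (all invented for illustration).
corpus = [
    "Smith, J. (2019). Deep Learning for Text. Journal of AI Research.",
    "Lee, K. (2021). Pattern Learning in Language Models. Proc. of NeurIPS.",
    "Patel, R. (2020). Memorization versus Generalization. Journal of ML.",
]

# Learn the "pattern": for each token, record which tokens followed it.
follows = {}
for line in corpus:
    tokens = ["<start>"] + line.split() + ["<end>"]
    for prev, nxt in zip(tokens, tokens[1:]):
        follows.setdefault(prev, []).append(nxt)

def generate(seed=None):
    """Sample a 'citation' one token at a time from the learned pattern."""
    rng = random.Random(seed)
    token, out = "<start>", []
    while True:
        token = rng.choice(follows[token])
        if token == "<end>":
            return " ".join(out)
        out.append(token)

print(generate(seed=3))
# Depending on the seed, the walk splices fragments of different
# training lines into a plausible-looking citation that appears
# nowhere in the corpus: a pattern match, not a memory.
```

The point of the sketch is just that the generator has no notion of "this citation exists"; it only knows "this token plausibly follows that one." Scale that up and you get outputs that look like correct answers because they match the shape of correct answers, whether or not the underlying fact was ever memorized.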