this post was submitted on 12 Sep 2024
Technology
So they slapped some reinforcement learning on top of their LLM and are claiming that gives it “reasoning capabilities”? Or am I missing something?
It's like three LLMs stacked on top of each other in a trenchcoat, with a calculator slapped on so it gets the math right.
No, the article is badly worded. Earlier models already had reasoning skills via some rudimentary chain-of-thought (CoT), but they leaned much more heavily into it for this model.
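For anyone unfamiliar, chain-of-thought just means prompting the model to spell out intermediate steps before committing to a final answer. A minimal sketch of the idea (the function names and prompt wording here are illustrative, not any vendor's actual API):

```python
# Toy illustration of chain-of-thought (CoT) prompting.
# These helpers only build prompt strings; you would send the result
# to whatever text-completion endpoint you use.

def plain_prompt(question: str) -> str:
    """Ask for the answer directly, with no reasoning requested."""
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    """Ask the model to reason step by step before answering —
    the core trick behind chain-of-thought prompting."""
    return f"Q: {question}\nA: Let's think step by step."

question = "If I have 3 apples and buy 2 more, how many do I have?"
print(plain_prompt(question))
print(cot_prompt(question))
```

The only difference is the trailing nudge in the second prompt; on many reasoning-style benchmarks that small change measurably improves answers, which is why labs now bake the behavior into training rather than relying on the prompt.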
My guess is they didn't train it on the usual ten-trillion-word corpus (which is expensive and has diminishing returns) but rather on a heavily curated RLHF dataset.