this post was submitted on 06 Oct 2024
111 points (88.8% liked)
Technology
I'm pretty sure that would be three orders of magnitude.
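For anyone wanting to sanity-check the "three orders of magnitude" claim: an order of magnitude is a factor of ten, so the gap between two quantities is just the base-10 log of their ratio. The token counts below are made-up placeholders, not numbers from the article.

```python
import math

def orders_of_magnitude(a, b):
    """Base-10 orders of magnitude separating two quantities."""
    return math.log10(a / b)

# Hypothetical example: a 1T-token training set vs a 1B-token one
# differ by a factor of 1000, i.e. three orders of magnitude.
print(orders_of_magnitude(1_000_000_000_000, 1_000_000_000))  # 3.0
```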
They're not talking about the same thing.
That's in reference to the size of the model itself.
That's in reference to the size of the training data that was used to train the model.
Minimizing both is useful, but for different reasons: a smaller training set makes the model cheaper to train, and a smaller model makes it cheaper to run.
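A rough back-of-envelope illustrates why the two sizes matter differently. The 7B-parameter / 1T-token figures below are hypothetical, and the "6 FLOPs per parameter per token" factor is the common rule of thumb for transformer training, not anything from the article.

```python
def inference_memory_gb(params, bytes_per_param=2):
    # Memory just to hold the weights at inference time
    # (2 bytes/param assumes fp16 storage).
    return params * bytes_per_param / 1e9

def training_flops(params, tokens):
    # Widely used estimate: ~6 FLOPs per parameter per training token.
    return 6 * params * tokens

# Hypothetical 7B-parameter model trained on 1T tokens:
print(inference_memory_gb(7e9))   # 14.0 GB of weights to serve it
print(training_flops(7e9, 1e12))  # ~4.2e22 FLOPs to train it
```

So training cost scales with the product of model size and dataset size, while serving cost depends on model size alone, which is why shrinking each one pays off in a different place.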
After a quick skim, the article seems to have a lot of errors. Molmo is trained on top of Qwen, and the smallest Molmo variants are built on a model from the same company that makes Molmo.