this post was submitted on 24 Jan 2024
I find this extraordinarily unconvincing. Firstly, it's based on the idea that random graphs are a great model for LLMs because they share a single superficial similarity. That's not science, that's poetry.

Secondly, the researchers completely misunderstand how LLMs work. The assertion that a sentence could not have appeared in the training set doesn't prove anything; that's expected behaviour. "Stochastic parrot" was never supposed to mean that the model only regurgitates text it has already seen, but rather that the text it produces is a statistically plausible response to the input, based on very high-dimensional feature vectors. Those features could well relate to what we think of as meaning or concepts, but they're meaning or concepts that were inherent in the training material.
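To make the "novel text doesn't refute statistical modelling" point concrete, here's a toy sketch (my own illustration, not from the paper): even a trivial bigram model — where each word is drawn based only on the previous word — can emit sentences that never appear verbatim in its training corpus. The corpus and function names below are purely illustrative.

```python
import random

# Tiny training corpus: two sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]

# Build bigram statistics: for each word, the list of words observed after it.
model = {}
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)

def generate(start, length, rng):
    """Sample a word sequence by repeatedly drawing a statistically
    plausible next word; stop early if the current word was never
    followed by anything in training."""
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Sampling can produce e.g. "the cat sat on the rug" -- a sentence in
# neither training example, yet every word transition in it was learned.
print(generate("the", 6, random.Random(0)))
```

The same logic scales up: an LLM conditioning on high-dimensional features rather than a single previous word will produce unseen sentences as a matter of course, so "this sentence wasn't in the training set" tells you nothing by itself.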