this post was submitted on 18 Sep 2024
444 points (94.2% liked)
Technology
Every single one of us, as kids, learned the concept of "garbage in, garbage out"; most likely in terms of diet and food intake.
And yet every AI cultist makes the shocked pikachu face when they figure out that trying to improve an LLM by feeding it data generated by the very same LLM you're trying to improve is an exercise in diminishing returns and generational degradation in quality.
Why has the world gotten both "more intelligent" and yet fundamentally more stupid at the same time? Serious question.
Because it's not actually always true that garbage in = garbage out. DeepMind's AlphaZero trained itself from a very bad chess player into something significantly stronger than any human has ever been, simply by playing games against itself and updating its parameters for evaluating which positions were better than which. All the system needed was the rule set for chess, a way to define wins, draws, and losses, and a training procedure that optimized for winning over drawing, and drawing over losing when a win was no longer available.
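If you want to see what that looks like in code, here's a toy self-play loop in Python. It plays Nim instead of chess (so it stays short) and is nothing like DeepMind's actual implementation, but the core idea is the same: the only training signal is the game's own outcome, mapped to +1 for a win and -1 for a loss.

```python
import random
from collections import defaultdict

PILE = 10          # starting stones; taking the last stone wins
MOVES = (1, 2, 3)  # stones a player may remove per turn

Q = defaultdict(float)   # (stones_left, move) -> estimated value for the player to move
EPSILON, ALPHA = 0.1, 0.5

def choose(stones):
    """Mostly pick the best-known move, occasionally explore a random one."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(stones, m)])

def play_one_game():
    """Self-play one game, then reward the winner's moves and punish the loser's."""
    history, stones, player = [], PILE, 0
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player ^= 1
    winner = history[-1][0]  # whoever took the last stone
    for who, s, m in history:
        reward = 1.0 if who == winner else -1.0
        Q[(s, m)] += ALPHA * (reward - Q[(s, m)])

for _ in range(20000):
    play_one_game()

# With enough games the greedy policy typically rediscovers the classic
# Nim strategy: leave your opponent a multiple of 4 stones.
print({s: max((m for m in MOVES if m <= s), key=lambda m: Q[(s, m)]) for s in range(1, PILE + 1)})
```

No human games, no labels, just win/loss feedback from the rules themselves; that's why self-play doesn't suffer from garbage in, garbage out.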
Face swaps, and deepfakes in general, relied on adversarial training as well: one network learned to produce fakes, another learned to detect them, and the two improved by trying to beat each other.
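Here's roughly what that adversarial dynamic looks like as code, a minimal GAN-style training loop in PyTorch. It learns a toy 1-D distribution rather than faces, and it's just a sketch of the idea, not anyone's production deepfake pipeline.

```python
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 3.0   # the "real" distribution: N(3, 0.5)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator: sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    # 1) The detector learns: real samples get label 1, generated samples label 0.
    real, fake = real_data(64), G(torch.randn(64, 1)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) The trickster learns: produce samples the detector scores as real.
    fake = G(torch.randn(64, 1))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The generator's output mean should drift toward the real data's mean (~3.0).
print(G(torch.randn(1000, 1)).mean().item())
```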
Some tech guys thought they could bring that adversarial dynamic for improving models to generative AI, training models on their own outputs and improving over those outputs. But the problem is that there isn't a crisp definition of a "good" or a "bad" output, so the feedback loop poisons itself once it starts optimizing for criteria that drift away from what humans would actually consider good or bad.
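You can see that self-feeding degradation with a toy example in a few lines of Python (just fitting a Gaussian, purely illustrative, nothing to do with any real LLM): each "generation" is trained only on samples from the previous generation, and with no outside signal of good vs. bad, the estimate drifts and the tails erode.

```python
import numpy as np

rng = np.random.default_rng(0)
human_data = rng.normal(loc=0.0, scale=1.0, size=200)   # the original, human-made "content"

mu, sigma = human_data.mean(), human_data.std()          # generation 0: fit to real data
for generation in range(1, 51):
    synthetic = rng.normal(mu, sigma, size=50)           # train only on the model's own output
    mu, sigma = synthetic.mean(), synthetic.std()        # re-fit on that output
    if generation % 10 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
# The spread typically shrinks and the mean wanders over generations:
# each pass forgets a little more about the original distribution's tails.
```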
So it's less like the AI techniques that came before, and more like how Netflix poisoned its own recommendation engine by producing content informed by that same engine. When you passively observe trends and connections, you can model those trends. But once you start feeding back into the data by producing shows and movies your model predicts will do well, the feedback loop becomes unpredictable and stops working well, because you're over-fitting the training data with new stuff your own model merely thought would be "good."
good commentary, covered a lot of ground - appreciate the effort to write it up :)