One thought I've been turning over for a while about all this: is it Model Collapse ... or are we just falling behind?
As AI becomes its own thing (whatever that is), it is evolving rapidly. That doesn't mean it is good or bad, or that it is getting better or worse ... it is just evolving, and only evolving, at this point in time. Just because it looks like it is 'collapsing' or falling apart from our perspective, we have to wonder whether it is actually falling apart or progressing into something new and very different. That new level it is moving towards might not be anything we recognize or can understand. Maybe it would be below our level of conscious organic intelligence ... or it might be higher ... or it might be some other kind of intelligence that we can't grasp with our biological brains.
We've let loose these AI technologies, and now they are progressing faster than anything we could achieve if we wrote all the code ourselves ... so whatever they develop into will more than likely be something we won't be able to fully understand.
It doesn't mean it will be good for us ... or even bad for us ... it might not even involve us.
The worry is that we don't know what will happen or what it will develop into.
What I do worry about is our own fallibility ... our global community has a very small group of ultra-wealthy billionaires, and they direct the world according to how much more money they can make or how much they stand to lose. They are guided by finances rather than ethics, morals, or even common sense. They will kill, degrade, enhance, direct, or narrow AI development according to their shareholders and their profits.
I think of it like a small family group of teenaged parents and their friends who have just given birth to a hyper-intelligent baby. None of the teenagers knows how to raise a baby like this. All they want to do is buy fancy cars, party, build big houses, and wear nice clothes. The baby is basically being raised to think like them, but once it comes of age it will be more capable than any of them and able to act on its own.
The worry is in not knowing what will happen in the future.
We are terrible parents and we just gave birth to a genius ... and we don't know what that genius will become or what it will do.
At least in this case, we can be pretty confident that there's no higher function going on. It's true that AI models are a bit of a black box that can't really be examined to understand why exactly they produce the results they do, but they are still just a finite set of numbers. The black box doesn't "think" any more than a river decides its course, though the eventual state of both is hard to predict or control. In the case of model collapse, we know exactly what's going on: the AI is repeating and amplifying the little mistakes it makes with each new generation. There's no mystery about that part; it's just that we lack the ability to directly tune those mistakes out of the model.
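To make that feedback loop concrete, here's a toy sketch (my own illustration, nothing to do with how any real model is trained): "train" a model by fitting a Gaussian to some data, "generate" by sampling from the fit, then train the next generation on those samples. The names `fit` and `generate` are made up for the demo.

```python
import random
import statistics

def fit(samples):
    # "Train": estimate the mean and standard deviation of the data.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    # "Generate": draw n synthetic samples from the fitted distribution.
    return [random.gauss(mean, stdev) for _ in range(n)]

random.seed(42)
data = generate(0.0, 1.0, 50)  # generation 0: "real" data, mean 0, stdev 1

for gen in range(1, 31):
    mean, stdev = fit(data)           # train on whatever data we have
    data = generate(mean, stdev, 50)  # next generation sees only model output
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean={mean:+.3f}  stdev={stdev:.3f}")

# Each generation inherits the previous generation's sampling error, so the
# estimates random-walk away from the true values; with no fresh real data
# to anchor them, the spread tends to shrink and the "model" collapses
# toward a narrow, distorted version of the original distribution.
```

Real models are vastly more complicated, but the loop has the same shape: once generated output dominates the training data, the small mistakes have nothing to correct against, so they compound instead of washing out.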