this post was submitted on 21 Jun 2024
36 points (73.7% liked)
Technology
you are viewing a single comment's thread
There are very few people in the world who understand LLMs on as deep a technical level as Ilya.
I honestly don't think there is much else in the world he is interested in doing other than working on aligning powerful AI.
Whether his almost anti-commercial style ends up accomplishing much, I don't know, but his intentions are literal and clear.
What do you mean by anti-commercial style? I am not from North America, but this seems like pretty typical PR copytext for local tech companies. Lots of pomp, banality, bombast and vague assertions of caring about the world. It almost reads like satire at this point, like they're trying to take the piss.
If his intentions are literal and clear, what does he mean by “superintelligence” (please be specific) and in what way is it safe?
This is the guy who turned against Sam for being too focused on releasing product. I don't think he plans on delivering much product at all. The reason to invest isn't to gain profit but to avoid losing to an apocalyptic event, which you may or may not personally believe in; many Silicon Valley types do.
A safe AI would be one that does not spell the end of humanity or the planet. Ilya is famously obsessed with creating what's basically a benevolent AI god-mommy and deeply afraid of an uncontrollable, malicious Skynet.
Well, good news: if the product you're imagining is 'Skynet' or a 'god-mommy', both of those are science fiction, and we don't need whatever this bullshit is to save us.
You're entitled to that opinion, and so are others. Sutskever may be an actual loony... or an absolute genius. Or both; that isn't up for debate here.
I am just explaining what this is about, because if you think this is "just another money raiser" you obviously haven't paid enough attention to who exactly this guy is.
Superintelligence is a well-defined term in artificial intelligence, btw, in case you're still confused. You may have seen these terms plastered around as buzzwords, but all of these definitions precede the AI hype of the last few years.
ML = machine learning: algorithms that improve over time as they see more data (see the toy sketch after this list).
AI = artificial intelligence: machine learning with complex logic, mimicking real intelligence. <- we are here
AGI = artificial general intelligence: an AI agent that functions intelligently at a level indistinguishable from a real human. <- experts estimate this will be achieved before 2030
ASI = artificial superintelligence: AGI that transcends human intelligence and capabilities in every way.
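A minimal sketch of the "algorithms that improve over time" idea from the ML line above, nothing more. The data points, learning rate, and step count are made up for illustration; it just fits a one-parameter line by gradient descent.

```python
# Toy example: fit y = w * x (approximately) to a few made-up points by
# gradient descent. Each step uses the data to nudge w toward a smaller
# error, i.e. the program "improves over time" instead of being
# hand-coded with w = 2.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0     # the model's single parameter, starting from no knowledge
lr = 0.01   # learning rate (made-up value)

for step in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # ~2.04: learned from the data, not programmed in
```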
It may not sound real to you, but if you ever visit the singularity sub on Reddit you will see that a great number of people think it is.
Also, everything is science fiction till it's not. Horseless carriages were science fiction; so were cordless phones. The first airplane went up in 1903, and 66 years later we landed on the moon.
The point is not that we can't imagine speculative technologies. The point is that this is a grift which distracts from the real and present threats of AI, like those to privacy, artists' livelihoods, and the internet itself, which is being poisoned by LLM-generated content.