this post was submitted on 30 Sep 2024
196 points (93.0% liked)

Technology


Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

[–] [email protected] 52 points 2 months ago (2 children)

It's a classic BigTech marketing trick. They are the only ones able to build "it", and it doesn't matter whether we like "it" or not because "it" is coming.

I believed in this BS for longer than I care to admit. I thought, "Oh yes, that's progress," so of course it will come; it must come. It's also very complex, so nobody but such large entities with so many resources can do it.

Then... you start to encounter more and more vaporware. Grandiose announcements, and when you try the result, you can't help but be disappointed. You compare what was promised with what was delivered, think it's kind of cool, shrug, and move on with your day. It happens again and again. Sometimes you see something really impressive, you dig in, and you realize it's a partnership with a startup or a university doing the actual research. The more time passes, the more you realize that all BigTech companies do it, across technologies. You also realize that your artist friend did something just as cool, and open-source. Their version does not look polished, but it works. You find a Kickstarter for a product that is genuinely novel (say, the Oculus DK1) and has no link (initially) with BigTech...

You finally realize, year after year, that you have been brainwashed into believing only BigTech can do it. It's false. It's self-serving BS meant both to keep you from building and to make you depend on them.

You can build, we can build and we can build better.

Can we build AGI? Maybe. Can they build AGI? They sure want us to believe it, but they have lied through their teeth before, so until they actually deliver, they can NOT.

TL;DR: BigTech is not as powerful as it claims to be, and it benefits from the hype, in this AI hype cycle and otherwise. It can't be trusted.

[–] [email protected] 9 points 2 months ago (2 children)

It's one thing to claim that the current machine-learning approach won't lead to AGI, which I can get behind. But this article claims AGI is impossible simply because there are not enough physical resources in the world? That's a stretch.

[–] [email protected] 6 points 2 months ago (1 children)

I haven't seriously read the article yet, unfortunately (deadline tomorrow), but if there is one thing I believe is reliable, it's computational complexity. It's one thing to be creative and ingenious, to find new algorithms and build very efficient processors and datacenters, letting us compute increasingly complex things. It's another thing entirely to "break free" of complexity. That, as far as we currently know, is impossible. What is counterintuitive is that seemingly "simple" behaviors scale terribly: you can compute a few iterations by hand, or with a computer, or with a very powerful cluster... or with every computer in existence... only to realize that the next iteration of that well-understood problem would still NOT be solvable with every computer (even quantum ones) ever made, or that could ever be made from the resources available in, say, our solar system.
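A toy sketch of that scaling argument (my own illustration, not from the paper): take a deliberately generous, hypothetical budget of 10^50 total operations for all hardware that could ever be built, and see how few iterations of a problem whose cost doubles each step actually fit inside it.

```python
def largest_solvable_n(ops_budget: float) -> int:
    """Largest n for which a brute-force problem costing 2**n
    operations still fits within the given operations budget."""
    n = 0
    while 2 ** (n + 1) <= ops_budget:
        n += 1
    return n

# Hypothetical ceiling: 1e50 operations from every computer
# that could ever exist (a deliberately generous guess).
budget = 1e50

n = largest_solvable_n(budget)
print(n)                       # → 166: only ~166 doublings fit
print(2 ** (n + 1) > budget)   # → True: one more step already exceeds it
```

The point is that the budget barely matters: multiplying it by a trillion (10^12) only buys about 40 more doublings, which is why "just add more hardware" never escapes exponential cost.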

So... yes, it is a "stretch", maybe even counterintuitive, to go as far as saying it is not and NEVER will be possible to realize AGI, but that's what their paper claims. It's at least interesting precisely because it goes against the trend we hear CONSTANTLY pretty much everywhere else.

[–] [email protected] 5 points 2 months ago

PS: full disclosure, I still believe self-hosting AI is interesting (cf. my notes at https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence), but that doesn't mean AGI can be reached, even less that it'd happen "soon". IMHO AI as a research field is interesting enough that it doesn't need grandiose claims, especially not ones leading to learned helplessness.

[–] [email protected] 1 points 2 months ago (1 children)

Maybe, if they keep using digital computers. What they need is an analogue system; it's much more efficient for this kind of work.

[–] [email protected] 1 points 2 months ago

Saw a great video about this (project is still ongoing).

[–] [email protected] 7 points 2 months ago (1 children)

And the big tech companies also stand to benefit from overhyping their product to the point of saying it will take over the world. They look better to investors, and they can justify laws making them the sole arbiters of this technology to "keep it out of criminal hands", while happily serving criminals for a fee.

[–] [email protected] 3 points 2 months ago

Indeed, AKA the OpenAI playbook.