this post was submitted on 30 Sep 2024
Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

[–] [email protected] 14 points 1 month ago* (last edited 1 month ago) (7 children)

This is a silly argument:

[..] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’

That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.

‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we'd even get close,’ Olivia Guest adds.

That's as shortsighted as the "I think there is a world market for maybe five computers" quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren't the path to AGI, but there's no reason to think we can't achieve it in general unless you're religious.

EDIT: From the paper:

The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.

That's a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it doesn't mean the result has any relationship to the real world.

[–] [email protected] 9 points 1 month ago (5 children)

This is a gross misrepresentation of the study.

That's as shortsighted as the "I think there is a world market for maybe five computers" quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.

That's not their argument. They're saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.

Maybe transformers aren't the path to AGI, but there's no reason to think we can't achieve it in general unless you're religious.

They're not talking about achieving it in general, they only claim that no known techniques can bring it about in the near future, as the AI-hype people claim. Again, they prove this.

That's a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn't mean it has any relationship to the real world.

That's not what they did. They set up an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. they have a computer with limitless memory, infinite and perfect training data, the ability to sample without any bias, current techniques can eventually create AGI, the AGI only has to be slightly better than random chance rather than perfect, etc.), and then present a computational proof that even this scenario contradicts established results in complexity theory.

Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There's a technical explanation in the paper that I'm not going to try and rehash, since it's been too long since I worked on computational proofs, but it seems to check out. But this is a contradiction, because we have hard mathematical proof that no polynomial-time algorithm for that problem can exist (unless P = NP). Therefore, learning an AGI must also be NP-hard. And because every known AI learning method runs in polynomial time, it cannot possibly lead to AGI. It's not a strawman, it's a hard proof of why it's impossible, like proving that pi has infinitely many decimals or something.
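To make the scale concrete (my own back-of-the-envelope sketch, not from the paper): calling a problem NP-hard means, in practice, that the best known procedures take time that grows exponentially with the problem size, and exponentials outrun any physical resource budget almost immediately:

```python
# Illustration (not from the paper): why "NP-hard" gets glossed as
# "provably infeasible" at scale. An exponential-time search is
# quickly dwarfed by any physical resource budget.

ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80  # rough standard estimate

def smallest_intractable_n(budget: int) -> int:
    """Smallest input size n at which a 2**n-step search exceeds the budget."""
    n = 0
    while 2**n <= budget:
        n += 1
    return n

n = smallest_intractable_n(ATOMS_IN_OBSERVABLE_UNIVERSE)
print(n)  # 266: past ~266 items, a 2^n search can't even take
          # one step per atom in the observable universe
```

So if training an AGI really does reduce to an NP-hard problem, "just add more compute" stops being an answer at laughably small input sizes.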

Ergo, anyone who claims that AGI is around the corner either means "a good AI that can demonstrate some but not all human behaviour" or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don't just offer a thought experiment, they provide a computational proof for this.

[–] [email protected] 0 points 1 month ago

There are a number of major flaws with it:

  1. Assume the paper is completely true. It's only proved the algorithmic complexity of the problem, but so what? What if the general case is NP-hard, but not the cases we actually care about? That's been true for other problems; why not this one?
  2. It proves something in a model. So what? Prove that the result applies to the real world.
  3. Replace "human-like" with something trivial like "tree-like". The paper would then prove that we'll never achieve tree-like intelligence?
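Point 1 has well-known precedents; here's a toy example of my own (not one from the paper or the thread). Minimum vertex cover is NP-hard on arbitrary graphs, yet on trees, a restricted case you might actually care about, a simple dynamic program solves it in linear time:

```python
# Illustration: an NP-hard problem whose restricted instances are easy.
# Minimum vertex cover is NP-hard on general graphs, but on trees a
# linear-time DP finds the exact optimum.

def min_vertex_cover_tree(adj: dict[int, list[int]], root: int = 0) -> int:
    """Size of a minimum vertex cover of a tree given as adjacency lists."""
    def dfs(u: int, parent: int) -> tuple[int, int]:
        # Returns (cover size if u is excluded, cover size if u is included).
        exclude, include = 0, 1
        for v in adj[u]:
            if v == parent:
                continue
            ex_v, in_v = dfs(v, u)
            exclude += in_v             # u excluded: child must be in the cover
            include += min(ex_v, in_v)  # u included: child may go either way
        return exclude, include

    return min(dfs(root, -1))

# A path 0-1-2-3-4: the optimal cover is {1, 3}, size 2.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(min_vertex_cover_tree(path))  # 2
```

Whether the instances relevant to cognition are similarly "easy" special cases is exactly the open question a worst-case hardness proof doesn't settle.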

IMO there are also flaws in the argument itself, but the points above are more relevant.
