leftzero

joined 1 year ago
[–] [email protected] 0 points 5 months ago (1 children)

Some of them are inventing completely new ways of doing things

No, they're not. All the money is now on the LLM autocomplete chatbots.

Real progress on AI won't resume until after the LLM bubble has burst. (And even then, investors will probably be wary of putting money into AI for a few decades, because LLMs are being marketed as AI despite having little to do with it.)

It's quite depressing, really.

[–] [email protected] -1 points 5 months ago (1 children)

All the money's going into the LLM bubble, so there won't be any left for actual AI research until it bursts.

[–] [email protected] 1 points 5 months ago (2 children)

I'm not talking about "machines" or any other generic term.

I'm talking specifically about LLMs. And their limitations are evident. For instance, maths is one of the many things they can't do (and will never be able to do in any efficient way).

We have, indeed, developed programs that play chess better than people (though, sadly, until the LLM bubble pops we probably won't get any further). But they're not LLMs, or anything resembling an LLM, because one of the many other things an LLM can't do is play games of skill. Or reason. Or solve puzzles. Or even have a concept of strategy.

LLMs, again, can only do one single thing. And that's to pick from their deck the card that, according to their training data, most often followed the sequence of cards already on the table.

That's all they do. That's all they'll ever be able to do, because that's how they work. And, sure, with that you can make it look like they're holding a conversation (until you ask them something that isn't covered by their training data), but that's it.

They'll put one word after another according to statistics (not, keep in mind, according to meaning or strategy or anything like that; they don't, and can't, know or care what the words mean, whether the sentence they've put together makes any sense, or whether what it states is true or false), and that's that.
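(A toy illustration of that "statistics, not meaning" point; the corpus and words below are made up, and real LLMs work on learned scores over subword tokens rather than raw word counts, but the principle is the same:)

```python
from collections import Counter, defaultdict

# Toy "autocomplete": pick the word that most often followed the
# current word in the training text. No meaning, no truth-checking;
# just counts. (Made-up corpus; not a real LLM.)
corpus = "the cat sat on the mat and the cat slept".split()

follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def next_word(word):
    # Return the statistically most frequent follower, if any.
    followers = follow_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(next_word("the"))  # -> 'cat' (seen twice after 'the')
```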

They won't play chess, they won't write good innovative code, they won't write original stories, and they won't drive your car.

We don't need to know how what we call consciousness works to know that. We just need to know how LLMs work. And that we most definitely do.

[–] [email protected] 2 points 5 months ago (4 children)

Because there are many aspects of what we understand as "actual thinking" (understanding concepts, learning, or solving puzzles, for instance) that LLMs are fundamentally incapable of achieving, no matter how large or complex we make them or how much we optimise them.

They do one single thing (which, granted, they do relatively well): they take an input, they score every token in their vocabulary against it (using weights derived from their training data), and they output the one with the highest score. And that's all they do.
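(A minimal sketch of that scoring step, with made-up numbers; a real model computes one learned score per vocabulary token, and greedy decoding simply emits the top one:)

```python
import math

# Hypothetical scores for the next token; a real model produces one
# such score per vocabulary entry from its learned weights.
vocab = ["chess", "checkers", "poker", "go"]
logits = [2.1, 0.3, -1.0, 1.7]

# Softmax turns scores into a probability distribution...
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# ...and greedy decoding just picks the highest-probability token.
best = max(range(len(vocab)), key=lambda i: probs[i])
print(vocab[best], round(probs[best], 2))  # -> 'chess' 0.53
```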

And that's why, for instance, you'll never be able to make an LLM that's any good at playing chess: there simply wouldn't be enough atoms in the universe for it to store all possible states of the game, which it would need to have in its training data in order to autocomplete its next move (and that's not even accounting for the actual score computation, in both space and time).
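(For scale, a back-of-the-envelope comparison using two rough published figures: Shannon's classic ~10^120 estimate for the number of possible chess games, and the usual ~10^80 estimate for atoms in the observable universe:)

```python
# Rough published figures: Shannon (1950) estimated ~10^120 possible
# chess games; the observable universe is usually put at ~10^80 atoms.
possible_games = 10**120
atoms_in_universe = 10**80

# Even with one whole game stored per atom, you'd still need
# ~10^40 games crammed into every single atom.
games_per_atom = possible_games // atoms_in_universe
print(f"games per atom: 10^{len(str(games_per_atom)) - 1}")  # -> 10^40
```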

They're a cool fancy gimmick, possibly useful in certain cases as long as you can account for their hallucinations, but they're not any closer to actual intelligence than Eliza ever was.

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago) (6 children)

LLMs are incapable of "recognising" any patterns they haven't been trained on.

And they don't even really recognise those; they're just fancy autocomplete engines, simply outputting the highest-scored token given their input, based on statistics derived from their training data.

They're pattern-matching machines; there's no recognition, no inner modelling of new knowledge, no self-reference, and no understanding of any kind, merely blind statistics.

They're just bigger, fancier Elizas, and just as distant as Eliza was from any practical form of intelligence, artificial or natural.

While I personally do believe that achieving AGI¹ on a Turing machine is possible, LLMs, and how they work, are an excellent example in support of John Searle's arguments against it in his Chinese room thought experiment.

1— Or at least something equivalent to human intelligence, or better, by the measures by which we consider ourselves intelligent; though it's arguable whether we can really be considered intelligent at all, or whether we're just better, more complex Chinese rooms.

[–] [email protected] -1 points 5 months ago (3 children)

Exactly, but LLMs are preventing further advances in AGI.

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago) (8 children)

No, I'm a self-referential pattern recognition machine.

[–] [email protected] 6 points 5 months ago

Proper AI definitely could.

LLMs…? Not a chance; they're an absolute dead end, just a modern Eliza.

[–] [email protected] 14 points 5 months ago (18 children)

LLMs aren't going to be designing anything; they're just fancy autocomplete engines with a tendency to hallucinate facts they haven't been trained on.

LLMs are preventing real advances in AI by funnelling attention and funding into what's evidently a dead end.

[–] [email protected] 16 points 6 months ago

It's called xitter, and it's full of xit(s).

[–] [email protected] 19 points 6 months ago (3 children)

Meh, good luck with that.

All my Reddit comments have just said "Comment redacted in protest against Reddit's deranged attacks against third party apps, the community, and common sense. See y'all in Lemmy or Kbin once this embarrassment of a site is done enshittifying itself out of existence. Monetize this, u/spez, you greedy little pigboy. 🖕" since I edited them before moving here. 🤷‍♂️

[–] [email protected] 3 points 6 months ago

The thing is that, as you said, it's happened several times before. Beta Ray Bill, Red Norvell, Eric Masterson... it's been established for a long time that in the Marvel universe the title of Thor, God of Thunder, may be held by people who aren't Thor Odinson (and that he might occasionally lose it, though so far only temporarily, at least in the main continuity).
