this post was submitted on 05 Feb 2024
667 points (87.9% liked)

I think AI is neat.

[–] [email protected] 172 points 9 months ago (66 children)

They're kind of right. LLMs are not general intelligence and there's not much evidence to suggest that LLMs will lead to general intelligence. A lot of the hype around AI is manufactured by VCs and companies that stand to make a lot of money off of the AI branding/hype.

[–] [email protected] -4 points 9 months ago* (last edited 9 months ago) (14 children)

Yes. But the more advanced LLMs get, the less it matters, in my opinion. I mean if you have two boxes, one of which is actually intelligent and the other is "just" a very advanced parrot - it doesn't matter, given they produce the same output. I'm sure LLMs can already surpass some humans, at least in certain disciplines. In a couple of years, the difference between a parrot-box and something actually intelligent will only show at the very fringes of massively complicated tasks. And that is well beyond the capability threshold needed to do nasty stuff with it, to shed a dystopian light on it.

[–] [email protected] 9 points 9 months ago (2 children)

I mean if you have two boxes, one of which is actually intelligent and the other is "just" a very advanced parrot - it doesn't matter, given they produce the same output.

You're making a huge assumption: that an advanced parrot produces the same output as something with general intelligence. And I reject that assumption. Something with general intelligence can produce something novel. An advanced parrot can only repeat things it's already heard.

[–] [email protected] 4 points 9 months ago (1 children)

How do you define novel? Because LLMs absolutely have produced novel data.

[–] [email protected] 0 points 9 months ago

LLMs can't produce anything without being prompted by a human. There's nothing intelligent about them. IMO it's an abuse of the word 'intelligence', since they have exactly zero autonomy.

[–] [email protected] -3 points 9 months ago (1 children)

I use LLMs to create things no human has likely ever said, and they're great at it. For example:

'while juggling chainsaws atop a unicycle made of marshmallows, I pondered the existential implications of the colour blue on a pineapple's dream of becoming a unicorn'

When I ask it to do the same using neologisms, the output is even better. One of the words was 'exquimodal', which I then asked it to invent an etymology for; it combined 'excuistus' and 'modial' to define it as something beyond traditional measures, which fits perfectly into the sentence it created.

You can't ask a parrot to invent words with meaning and use them in context; that's a step beyond repetition. Of course it's not full, dynamic, self-aware reasoning, but it's certainly not just being a parrot.

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago) (1 children)

Producing word salad really isn't that impressive. At least the image-generation models are somewhat impressive.

[–] [email protected] 2 points 9 months ago

If you ask it to make up nonsense and it does, you can't get angry, lol. I normally use it to help analyse code or write sections of code, and sometimes to teach me how certain functions or principles work - it's incredibly good at that. I do need to verify it's doing the right thing, but I do that with my own code too, and I'm not always right either.

As a research tool it's great at taking a basic, dumb description and pointing me to the right things to look for, especially in areas with a lot of technical terms and obscure concepts.

And yes, they can occasionally make mistakes or invent things, but if you ask properly and verify what you're told, they're pretty reliable - far more so than a lot of humans I know.
