this post was submitted on 15 Oct 2024
494 points (96.4% liked)

Technology
[–] [email protected] 85 points 1 month ago (5 children)

The results of this new GSM-Symbolic paper aren't completely new in the world of AI research. Other recent papers have similarly suggested that LLMs don't actually perform formal reasoning and instead mimic it with probabilistic pattern-matching against the closest similar data seen in their vast training sets.

WTF kind of reporting is this, though? None of this is recent or new at all, like in the slightest. I am shit at math, but I have a high-level understanding of statistical modeling concepts, mostly as of a decade ago, and even I knew this. I recall a stats PhD describing these models as "stochastic parrots": nothing more than probabilistic mimicry. It was obvious the instant LLMs came on the scene that they were no different. If only tech journalists bothered to do a superficial amount of research, instead of being spoon-fed spin from tech bros with a profit motive...

[–] [email protected] 45 points 1 month ago (1 children)

It's written as if they literally expected AI to be capable of reasoning on its own, and not just a mirror of the bullshit that is put into it.

[–] [email protected] 39 points 1 month ago (2 children)

Probably because that's the common expectation due to calling it "AI". We're well past the point of putting the lid back on that can of worms, but we really should have saved that label for... y'know... intelligence, that's artificial. People think we've made an early version of Halo's Cortana or Star Trek's Data, and not just a spellchecker on steroids.

The day we make actual AI is going to be a really confusing one for humanity.

[–] [email protected] 11 points 1 month ago (2 children)

…a spellchecker on steroids.

Ask literally any of the LLM chat bots out there still using any headless GPT instances from 2023 how many Rs there are in “strawberry,” and enjoy. 🍓

[–] [email protected] 11 points 1 month ago

This problem is due to the fact that the AI isn't using English words internally; it's tokenizing. There are no Rs in {35006}.
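A toy sketch of what that means. The vocabulary and IDs below are made up for illustration (35006 borrowed from the comment above, not a real tokenizer's ID): the model receives integers, not letters, so a letter-counting question has no obvious handle.

```python
# Hypothetical subword vocabulary -- real tokenizers (BPE etc.) learn
# tens of thousands of pieces, but the principle is the same.
TOY_VOCAB = {"straw": 35006, "berry": 19772}

def tokenize(word):
    """Greedily split a word into known subword tokens (toy sketch)."""
    ids, rest = [], word
    while rest:
        for piece, tid in TOY_VOCAB.items():
            if rest.startswith(piece):
                ids.append(tid)
                rest = rest[len(piece):]
                break
        else:
            raise ValueError(f"no token for {rest!r}")
    return ids

print(tokenize("strawberry"))   # [35006, 19772]
# The model only ever sees those two integers; the three Rs are invisible
# unless it has memorized character-level facts about each token.
print("strawberry".count("r"))  # 3 -- trivial at the character level
```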

[–] [email protected] 5 points 1 month ago (1 children)

That was both hilarious and painful.

And I don't mean to always hate on it - the tech is useful in some contexts, I just can't stand that we call it 'intelligence'.

[–] [email protected] 3 points 1 month ago

LLMs don't see words, they see tokens. They were always just guessing.

[–] [email protected] 12 points 1 month ago

describing models as “stochastic parrots”

That is SUCH a good description.

[–] [email protected] 6 points 1 month ago

Clearly this sort of reporting is not prevalent enough, given how many people think we've actually come up with something new these last few years, rather than just throwing shitloads of graphics cards and data at statistical models.

[–] [email protected] 6 points 1 month ago

If only tech journalists bothered to do a superficial amount of research, instead of being spoon fed spin from tech bros with a profit motive…

This is outrageous! I mean the pure gall of suggesting journalists should be something other than part of a human centipede!

[–] [email protected] 2 points 1 month ago

I think it's because some people have been alleging that reasoning is happening, or is very close to it.