this post was submitted on 13 Jun 2025
890 points (99.2% liked)

Technology
[–] [email protected] 5 points 21 hours ago (2 children)

If my ISP's customer support doesn't even know what CGNAT is, but the AI does, I'm genuinely torn on whether this is a good move or not.

[–] [email protected] 9 points 21 hours ago

Try asking for a level 2 support tech. They'll normally pass your call to someone competent without any fuss.

[–] [email protected] 3 points 21 hours ago* (last edited 21 hours ago) (1 children)

See, that's just it: the AI doesn't know either. It just repeats things that approximate what has been said before.

If it has any power to make changes to your account, then it's going to be mistakenly turning people's services on or off, leaking details, etc.
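The "repeats approximations of its training data" view can be caricatured with a toy sketch. This is deliberately *not* how a real LLM works — it's a bigram counter over a made-up corpus — but it shows the purest form of the behaviour being described: emitting whichever continuation was most frequent in the training text.

```python
# Illustrative sketch only (far simpler than a real LLM): a bigram
# "language model" over a toy corpus that emits whichever word most
# often followed the current one in training.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Pick the most common continuation seen in training.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" — it followed "the" twice in the corpus
```

A real transformer replaces the lookup table with a learned function conditioned on the whole preceding context, which is where the disagreement below picks up.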

[–] [email protected] 2 points 20 hours ago (1 children)

> it just repeats things which approximate those that have been said before.

That's not correct and oversimplifies how LLMs work. I agree with the spirit of what you're saying, though.

[–] [email protected] 1 points 20 hours ago (1 children)

You're wrong but I'm glad we agree.

[–] [email protected] 2 points 17 hours ago (1 children)

I'm not wrong. There's mountains of research demonstrating that LLMs encode contextual relationships between words during training.

There's so much more happening beyond "predicting the next word". This is one of those unfortunate "dumbing down the science communication" things. It was said once and now it's just repeated non-stop.
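The "contextual relationships" point can be made concrete with a toy self-attention sketch. The vectors here are random stand-ins, not real embeddings, and this is a bare-bones caricature of the mechanism — but it shows the key property: the same word ends up with a *different* representation depending on the words around it, which is more than a flat "likelihood of the next word" table can express.

```python
# Toy sketch of dot-product self-attention, the mechanism behind
# "encoding contextual relationships". Each word's vector becomes a
# weighted mix of all words in its context, so an ambiguous word like
# "bank" gets different representations in different sentences.
# The embedding vectors below are random placeholders for illustration.
import numpy as np

def attention(X):
    # X: (num_words, dim) embeddings; scaled dot-product attention.
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over context
    return weights @ X  # each row is now a context-mixed vector

rng = np.random.default_rng(0)
bank = rng.normal(size=3)                         # the word "bank"
river, money = rng.normal(size=3), rng.normal(size=3)

ctx_a = attention(np.stack([river, bank]))[1]     # "bank" near "river"
ctx_b = attention(np.stack([money, bank]))[1]     # "bank" near "money"
print(np.allclose(ctx_a, ctx_b))                  # False: same word, different context
```

Stacking many such layers, each with learned projections, is what lets the model represent relationships between words rather than just a frequency table of continuations.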

If you really want a better understanding, watch this video:

https://youtu.be/UKcWu1l_UNw

And before your next response starts with "but Apple..."

Their paper has had many holes poked in it already. Also, it's no coincidence that the paper was released just before their WWDC event, which had almost zero AI content. They flopped so hard on AI that they even face class action lawsuits over false advertising. In fact, it turns out that many of their AI demos from last year were completely fabricated and didn't exist as a product when they were announced. Even some top Apple people only learned of those features during the announcements.

Apple's paper on LLMs is completely biased in their favour.

[–] [email protected] 0 points 15 hours ago (2 children)

Defining contextual relationships between words sounds like predicting the next word in a sequence, mate.

[–] [email protected] 1 points 14 hours ago

Only because it is.

[–] [email protected] 0 points 14 hours ago

Not at all. It's not "how likely is the next word to be X". That wouldn't be context.

I'm guessing you didn't watch the video.