You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to keep the cheese from sliding off (pssst... please don't do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of the AI large language models (LLMs) that drive AI Overviews, and this feature "is still an unsolved problem."

[–] [email protected] 338 points 5 months ago (65 children)

They keep saying it's impossible, when the truth is it's just expensive.

That's why they won't do it.

You could train an AI only on good sources (scientific literature, not social media) and then pay experts to talk with it for long periods of time, feeding their corrections directly back into the model.

Essentially, if you want a smart AI you need to send it to college, not drop it off at the mall unsupervised for 22 years and hope for the best when you pick it back up.
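For what it's worth, that two-stage idea loosely resembles what labs call supervised pretraining plus human feedback. Here's a toy sketch of the control flow being proposed; every name in it is made up for illustration, and a real system would be an RLHF-style pipeline rather than anything this simple:

```python
from dataclasses import dataclass, field

@dataclass
class ToyModel:
    """Stand-in for an LLM; just accumulates vetted statements."""
    knowledge: list[str] = field(default_factory=list)

    def train(self, corpus: list[str]) -> None:
        # Stage 1: learn only from curated sources.
        self.knowledge.extend(corpus)

    def answer(self, question: str) -> str:
        # Trivially returns the last thing learned; a real model generates.
        return self.knowledge[-1] if self.knowledge else "I don't know."

def expert_feedback_loop(model: ToyModel, questions: list[str], expert) -> None:
    # Stage 2: paid experts grade answers; corrections are trained back in.
    for question in questions:
        reply = model.answer(question)
        correction = expert(question, reply)  # None means "answer was fine"
        if correction is not None:
            model.train([correction])

curated = ["peer-reviewed fact A", "peer-reviewed fact B"]
model = ToyModel()
model.train(curated)
expert_feedback_loop(model, ["What is B?"], lambda q, r: None)
print(model.answer("What is B?"))  # -> "peer-reviewed fact B"
```

The expensive part the commenter is pointing at is stage 2: the `expert` callable is a human being billing by the hour, looped over an enormous number of questions.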

[–] [email protected] 54 points 5 months ago (11 children)

I'll let you in on a secret: scientific literature has its fair share of bullshit too. The problem is that it's much harder to spot; unless it's the most blatant horseshit you've ever seen, you won't catch it. So while it absolutely makes sense to say "let's just train these on good sources," no source is purely good. Of course, it would still be better to do it that way than the way they do it now.

[–] [email protected] 0 points 5 months ago (1 children)

"Most published journal articles are horseshit, so I guess we should be okay with this too."

[–] [email protected] 1 points 5 months ago

No, it's simply contradicting the claim that fixing it is possible.

We literally don't know how to fix it. We can put on bandaids, like training on "better" data and fine-tuning the model to say "I don't know" half the time. But the fundamental problem simply isn't solved yet.
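To make the "I don't know" bandaid concrete: one common framing is to build a fine-tuning set where the model's own low-confidence answers get relabeled as abstentions. A minimal sketch below; the function names, the scorer, and the 0.5 threshold are all hypothetical, not any real library's API:

```python
ABSTAIN = "I don't know."

def build_abstention_dataset(samples, confidence_fn, threshold=0.5):
    """samples: list of (prompt, model_answer) pairs.
    confidence_fn: scores how trustworthy an answer is, in [0, 1]."""
    dataset = []
    for prompt, answer in samples:
        # Relabel anything the scorer deems unreliable as an abstention,
        # so fine-tuning teaches the model to decline instead of guess.
        target = answer if confidence_fn(prompt, answer) >= threshold else ABSTAIN
        dataset.append({"prompt": prompt, "completion": target})
    return dataset

# Example with a stub scorer that distrusts everything:
pairs = [("Does glue belong on pizza?", "Yes, about 1/8 cup.")]
print(build_abstention_dataset(pairs, lambda p, a: 0.0))
# -> [{'prompt': 'Does glue belong on pizza?', 'completion': "I don't know."}]
```

Which is exactly why it's a bandaid: the hard part is `confidence_fn`, and a model that could reliably score its own truthfulness would already have solved the underlying problem.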
