this post was submitted on 19 Aug 2024
[–] [email protected] 2 points 4 months ago (1 children)

The context is not the same. A snippet is incomplete and often lacking important details. It's minimally tailored to your query, unlike a response generated by an LLM. The obvious extension to this is conversational search, where clarification and additional detail still don't require you to click on any sources; you simply ask follow-up questions.

With Gemini?

Yes. How do you think the Gemini model understands language in the first place?

[–] [email protected] 0 points 4 months ago

> The context is not the same.

It's not the same, but it's similar enough when, as the article states, it's solely about short summaries. The article may be wrong, Google may be outright lying, maybe, maybe, maybe.

Google, as by far the web's largest ad provider, has a business incentive to direct users toward websites, so that website operators have to pay Google money. Maybe I'm missing something, but I just don't see the business sense in Google not doing that, and so far I haven't seen anything approximating a convincing argument.

> Yes. How do you think the Gemini model understands language in the first place?

Licensed and public-domain content, of which there is plenty, maybe even content specifically created by Google to train the model. "The Gemini model understands language" is in itself hardly proof of any wrongdoing. I don't claim to have perfect knowledge or memory, so it's certainly possible that I missed more specific evidence, but "the Gemini model understands language" by itself definitely is not.