this post was submitted on 10 Dec 2023
256 points (97.8% liked)

Twitter enforces strict restrictions against external parties using its data for AI training, yet it freely utilizes data created by others for similar purposes.

[–] [email protected] 65 points 11 months ago* (last edited 11 months ago) (3 children)

Yet another reminder that an LLM is not "intelligence" by any common definition of the term. The thing just scraped responses from other LLMs and parroted them as its own, even though they were completely irrelevant to itself. All delivered in an answer that sounds like it knows what it's talking about, copying the simulated "personal involvement" of the source.

In this case, sure, who cares? But the problem is that something sold by its designers as an expert of sorts is in reality prone to making shit up or leaning on bad sources, all while producing a very good language simulation that sounds convincing.

[–] [email protected] 36 points 11 months ago (1 children)

Meat goes in. Sausage comes out.

The problem is that LLMs are being sold as able to turn meat into a Black Forest gateau.

[–] [email protected] 10 points 11 months ago

Absolutely true. But I suspect the problem is that the thing is too expensive to build to be sold as a sausage, so if they can't make it look like a tasty confection they can't sell it at all.

[–] [email protected] 17 points 11 months ago (1 children)

Soon enough AI will be answering questions with only its own previous answers, meaning any flaws will be inherited by all future answers.

[–] [email protected] 9 points 11 months ago (1 children)

That’s already happening. What’s more, training an LLM on LLM-generated content degrades the model for some reason. It’s becoming a mess.
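The degradation has a simple statistical analogue. Here's a toy sketch (not an actual LLM; just a Gaussian distribution repeatedly refit on its own finite samples) of how recursive training on synthetic output can collapse a model. All names and parameters here are illustrative assumptions, not anything from the article:

```python
import random
import statistics

# Toy analogue of "model collapse": each generation is trained only on
# a small finite sample produced by the previous generation's model.
# The fitted spread (sigma) tends to drift toward zero over time, so
# later generations lose the diversity of the original data.

random.seed(42)

mu, sigma = 0.0, 1.0   # generation 0: the "real data" distribution
sample_size = 5        # tiny sample per generation, to exaggerate the effect

history = []
for generation in range(200):
    # draw synthetic "training data" from the current model...
    sample = [random.gauss(mu, sigma) for _ in range(sample_size)]
    # ...then refit the model on that synthetic data alone
    mu = statistics.mean(sample)
    sigma = statistics.stdev(sample)
    history.append(sigma)

print(f"sigma after 200 generations: {history[-1]:.2e}")
```

With a real model the mechanics are far more complex, but the direction is the same: finite sampling plus refitting on your own output compounds estimation error generation after generation.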

[–] [email protected] 3 points 11 months ago

It's self correcting in that way at least. If AI generation runs rampant, it'll be kept in check by this phenomenon.

[–] [email protected] 8 points 11 months ago

Anyone who needs reminding that LLMs are not intelligent has bigger problems.