[–] [email protected] 97 points 3 months ago (5 children)

Yep. It leads to a positive feedback loop. They just continue to self-reinforce whatever came out before.

And with increasing amounts of the internet being polluted with AI text output...

[–] [email protected] 29 points 3 months ago

In the USA, they call it the AlaLlama model.

[–] [email protected] 1 points 3 months ago

What about the Grrr! model after that astoundingly XD So Random! thing from Invader Zim?

He's an android or robot, right?

[–] [email protected] 17 points 3 months ago

That seems so obviously predictable.

[–] [email protected] 16 points 3 months ago (1 children)

To be fair, this doesn't sound much different from your average human using the internet.

[–] [email protected] 4 points 3 months ago

2024, Reverse Turing Test Challenge:

Can an LLM AI differentiate between human input and LLM AI input?

[–] [email protected] 9 points 3 months ago* (last edited 3 months ago)

You pretty much have to intentionally give it enough synthetic data to wreck it. OpenAI and Anthropic train their models on generated data to improve them. As long as there's supervision during training, which there always will be, this isn't really a problem. (Toy sketch of what that curation step could look like after the links.)

https://openai.com/index/prover-verifier-games-improve-legibility/

https://www.anthropic.com/research/claude-character
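
For anyone curious, here's a toy sketch of what that supervision could look like. Every name and threshold is an invented stand-in, not how OpenAI or Anthropic actually do it: a generator produces candidates, a verifier scores them, and only the vetted ones ever reach the training set.

```python
import random

# Toy sketch of supervised synthetic-data curation. All names and the
# threshold are made-up stand-ins, not any lab's real pipeline.

def generate(prompt: str, n: int = 8) -> list[str]:
    # Stand-in for sampling n candidate answers from a generator model.
    return [f"{prompt} :: draft {i} [q={random.random():.2f}]" for i in range(n)]

def verify(answer: str) -> float:
    # Stand-in for a verifier / reward model; here it just reads the
    # fake quality tag that generate() embedded in the string.
    return float(answer.rsplit("[q=", 1)[1].rstrip("]"))

def curate(prompts: list[str], threshold: float = 0.8) -> list[str]:
    # Low-scoring synthetic output is discarded, so junk never feeds
    # back into the next round of training.
    return [a for p in prompts for a in generate(p) if verify(a) >= threshold]

print(curate(["What is 2+2?", "Tell me a llama fact."]))
```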

[–] [email protected] 8 points 3 months ago

Well... It's built on statistics, and statistical inference will regress to the mean eventually. If all it ever gets to train on is closer and closer to the mean, there will be nothing left to work with. It will all be the average...
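
You can even watch it happen with a toy simulation (purely illustrative, not a real training loop): treat the "model" as a categorical distribution over tokens and refit it each generation on samples from the previous one. Any token that misses a single sampling round drops to probability zero and never comes back, so the distribution keeps narrowing toward its most common outputs.

```python
import random
from collections import Counter

# Toy illustration of collapse toward the mean under self-training:
# each "generation" is fit only on samples from the previous one.
# Rare tokens that fail to get sampled once are gone forever.

random.seed(42)
vocab = list(range(100))
weights = [1.0 / (r + 1) for r in range(100)]  # Zipf-ish "human" data

for gen in range(31):
    sample = random.choices(vocab, weights=weights, k=500)
    counts = Counter(sample)
    if gen % 5 == 0:
        print(f"gen {gen:2d}: {len(counts):3d} distinct tokens left")
    weights = [counts.get(tok, 0) for tok in vocab]  # refit on own output
```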