this post was submitted on 24 Mar 2024
1237 points (97.3% liked)

Memes

[–] [email protected] 22 points 7 months ago (16 children)

Anyone know why most are a 2021 internet data cut off?

[–] [email protected] 19 points 7 months ago (3 children)

Training from scratch and retraining are expensive. They also want to avoid training on ML outputs; they want primarily human-made works as samples, and since the initial public release of LLMs it has become harder to build large datasets without ML-generated content in them.

[–] [email protected] 13 points 7 months ago* (last edited 7 months ago)

There was a good paper that came out recently showing that training on ML-generated data results in model collapse. It's going to be really interesting; I don't know if they'll ever be able to train as easily again.
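The collapse dynamic that comment refers to can be sketched with a toy experiment (this is an illustrative sketch, not from the thread; the Gaussian setup, sample size, and generation count are arbitrary choices): repeatedly fit a distribution to a small sample drawn from the previous generation's fitted distribution. Estimation noise compounds across generations and the fitted spread drifts toward zero, which is the statistical intuition behind a model losing the tails of its data distribution.

```python
import random
import statistics

# Toy illustration of "model collapse": each generation is "trained"
# (here: a Gaussian mean/stdev fit) only on samples produced by the
# previous generation. With small samples, estimation error compounds
# and the fitted spread collapses toward zero.

random.seed(0)

mu, sigma = 0.0, 1.0    # the original "human" data distribution
n_samples = 5           # deliberately tiny: it amplifies the effect
generations = 200

spreads = [sigma]
for _ in range(generations):
    # Draw training data only from the current model's outputs,
    # then refit the model to that synthetic data.
    samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    spreads.append(sigma)

print(f"initial spread: {spreads[0]:.4f}, final spread: {spreads[-1]:.3g}")
```

After a couple hundred self-referential generations the fitted spread is a small fraction of the original: the "model" ends up producing a narrow sliver of what the real distribution contained.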

[–] [email protected] 4 points 7 months ago

I recall spotting a few reports about image generators having their training data contaminated with generated images, and the output becoming significantly worse. So yeah, I guess LLMs and image generators need natural sources, or it gets more inbred than the Habsburgs.

[–] [email protected] 0 points 7 months ago

I think it's telling that they acknowledge the stuff their bots churn out is often such garbage that training the bots on it would ruin them.
