this post was submitted on 09 Jan 2025
1989 points (98.3% liked)

The erasure of Luigi Mangione (substack.evancarroll.com)
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Right now, on Stack Overflow, Luigi Mangione's account has been renamed. Despite having fruitfully contributed to the network, he has been stripped of his name, and his account is now known as "user4616250".

This appears to violate the Creative Commons license under which Stack Overflow content is posted.

When the author asked about this:

As of yet, Stack Exchange has not replied to the above post, but they did promptly, within hours, give me a year-long ban for merely raising the question. Of course, they did draft a letter which credited the action to other events that occurred weeks before, where I had merely upvoted contributions from Luigi and bountied a few of his questions.

[–] [email protected] 24 points 2 months ago (3 children)

It can reproduce an API, but it can't solve actual problems. LLMs are completely incapable of innovation.

[–] [email protected] 7 points 2 months ago (1 children)

That's not really true, though. They come up with brand-new sentences all the time.

[–] [email protected] 1 points 2 months ago (1 children)

No, they can only take from things in their models.

Moreover, all of them use statistics, typically Bayesian, to get their results. What you get from an LLM is essentially an average* of the model data. This is why feeding LLM output back into a model is so toxic: it's already the average.

  * Yes, I know it's not really the average, but for laymen it's a good-enough comparison.
[–] [email protected] 2 points 2 months ago (1 children)

They only take from the statistical distributions of words in the context of preceding words (which is why they never say "the the", and why the grammar is nearly always correct). But that doesn't mean whole sentences are lifted from the source material. There are near-infinite paths through those word distributions, and many have never been produced by humans, so LLMs do produce sentences that have never been uttered before.

They couldn't produce new conceptual context spaces the way humans sometimes can, but they can produce new combinations within existing context spaces.
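The point about novel paths through word distributions can be sketched with a toy Markov-chain text generator (purely illustrative; a real LLM is vastly more complex, and the corpus and names here are made up). Even this crude model, which only knows which word can follow which, emits sentences that appear nowhere in its training data:

```python
import random
from collections import defaultdict

# Tiny made-up corpus; the "model" only learns bigram successions.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count which words follow which, with start/end markers.
following = defaultdict(list)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        following[a].append(b)

def sample_sentence(rng, max_words=30):
    """Walk the bigram chain from <s> until </s> (or a length cap)."""
    word, out = "<s>", []
    while len(out) < max_words:
        word = rng.choice(following[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

rng = random.Random(0)
print(sample_sentence(rng))  # often a sentence not present in the corpus
```

Sampling repeatedly yields grammatical recombinations such as "the cat sat on the rug" that the corpus never contained, which is the sense in which a statistical model can produce genuinely new sentences without inventing new concepts.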

[–] [email protected] 1 points 2 months ago (1 children)

Except they are. You ask about Discworld characters and it gives you direct full quotes from the books.

[–] [email protected] 1 points 2 months ago

You realise "LLMs can quote verbatim" is not a contradiction of "LLMs can create brand new sentences", right?

[–] [email protected] 0 points 2 months ago

It can, but you have to closely midwife it into doing so.

[–] [email protected] -5 points 2 months ago (2 children)

And yet the synthetic training data works, and models trained on it continue scoring higher on the benchmarks than ones trained on raw Internet data. Claim what you want about it, the results speak louder.

[–] [email protected] 1 points 2 months ago (1 children)

This is the peak, though. They require new data to get better, but most of the available new data is adulterated with AI slop. Once they start eating themselves, it's over.

[–] [email protected] 2 points 2 months ago (1 children)

You are speaking of "model collapse", I take it? That doesn't happen in the real world with properly generated and curated synthetic data. Model collapse has only been demonstrated in highly artificial circumstances where many generations of models were "bred" exclusively on the outputs of previous generations, without the sort of curation and blend of additional new data that real-world models are trained with.

There is no sign that we are at "the peak" of AI development yet.
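The "artificial circumstances" being described can be sketched numerically (a purely illustrative toy, not a real training run; all numbers are arbitrary): repeatedly fit a one-parameter Gaussian "model" to samples drawn only from the previous generation's model, with no fresh data or curation, and the distribution's variance tends to shrink generation by generation:

```python
import random
import statistics

# Toy sketch of degenerate self-training: each "generation" is a
# Gaussian fitted to a small sample from the previous generation,
# with no new real data mixed in.
rng = random.Random(42)

mu, sigma = 0.0, 1.0
variances = [sigma ** 2]
for generation in range(200):
    # "Train" the next model purely on the current model's outputs.
    samples = [rng.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    variances.append(sigma ** 2)

print(variances[0], variances[-1])  # variance typically collapses toward 0
```

This is the closed-loop regime in which collapse shows up; mixing in fresh real data each generation (as real pipelines do) breaks the feedback loop the toy relies on.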

[–] [email protected] 1 points 2 months ago (1 children)

We're already seeing signs of incestuous data input causing damage. The more that AI takes over, the less capable it will be.

[–] [email protected] 1 points 2 months ago

Are we, though? Newer models almost universally perform better than older ones, adjusted for scale. What signs are you seeing?

[–] [email protected] 1 points 2 months ago (1 children)

The results aren't worth the expense. So-called "AI" is the biggest bubble since the great recession.

[–] [email protected] 1 points 2 months ago (1 children)

Nah, I'm still giving that one to the blockchain. LLMs are going to be useful for a while, but Ethereum still hasn't figured out a real use, and they're the only ones that haven't given up and moved fully into coin gambling.

[–] [email protected] 1 points 2 months ago (1 children)

Blockchain currencies aren't a bubble, they're a scam.