this post was submitted on 08 May 2024
1717 points (99.3% liked)

Technology

60033 readers
2990 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 106 points 7 months ago (3 children)

That license would require ChatGPT to provide attribution every time it uses anyone's contributions from there as training data, and it would also require every output built on that training data to be released under the same license. That would legally prevent anything ChatGPT creates, even in part, from this training data from being closed source. Since they obviously aren't planning to do that, this is massively shitting on the concept of licensing.

[–] [email protected] 25 points 7 months ago* (last edited 7 months ago) (1 children)

CC attribution doesn't require the credits to appear immediately alongside the content, but it would result in one of the world's longest web pages: it would need the name of every poster and a link to every single comment used as training data, and Stack Overflow has roughly 60 million questions and answers combined.

[–] [email protected] 1 points 7 months ago (1 children)

They don't need to republish the 60 million questions; they just have to credit the authors, who are surely far fewer (but IANAL).

[–] [email protected] 1 points 7 months ago

appropriate credit — If supplied, you must provide the name of the creator and attribution parties, a copyright notice, a license notice, a disclaimer notice, and a link to the material. CC licenses prior to Version 4.0 also require you to provide the title of the material if supplied, and may have other slight differences.

Maybe that could be just a link to the user page, but otherwise I would see it as needing to link to each message or comment they used.
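To make the scale concrete, here's a minimal sketch of what one entry on such an attribution page might look like under the "appropriate credit" terms quoted above. All author names and URLs below are made-up placeholders, not real Stack Overflow data, and the exact format is an assumption:

```python
# Hypothetical sketch of one entry on a CC BY-SA attribution page.
# Author names and URLs are invented placeholders, not real data.

def attribution_line(author: str, profile_url: str, post_url: str) -> str:
    """Creator name, a link to the material, and a license notice,
    roughly matching the 'appropriate credit' terms quoted above."""
    return (f'{post_url} by {author} ({profile_url}), '
            f'licensed under CC BY-SA 4.0 '
            f'<https://creativecommons.org/licenses/by-sa/4.0/>')

# One line per comment used as training data -- at Stack Overflow's
# scale, tens of millions of entries like this one.
entries = [
    ("example_user", "https://stackoverflow.com/users/0000",
     "https://stackoverflow.com/a/0000"),
]
page = "\n".join(attribution_line(*e) for e in entries)
print(page)
```

Even a one-line-per-comment format like this, multiplied by tens of millions of posts, is why the page would be absurdly long rather than legally impossible.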

[–] [email protected] 16 points 7 months ago (2 children)

IF its outputs are considered derivative works.

[–] [email protected] 20 points 7 months ago (1 children)

Ethically and logically it seems like output based on training data is clearly derivative work. Legally I suspect AI will continue to be the new powerful tool that enables corporations to shit on and exploit the works of countless people.

[–] [email protected] 2 points 7 months ago

The problem is that the legal system, and thus IP law enforcement, is heavily biased toward very large corporations. Until that changes, corporations will keep exploiting, as they already were.

I don't see AI making it worse.

[–] [email protected] 1 points 7 months ago

They are not. A derivative work would be a translation or a theater play; nowadays, a game or a movie. Even stuff set in the same universe.

Expanding the meaning of "derivative" that massively would mean that pretty much any piece of code ever written is a derivative of technical documentation, and even of textbooks.

So far, judges simply throw out these theories, without even debating them in court. Society would have to move a lot further to the right, still, before these ideas become realistic.

[–] [email protected] 4 points 7 months ago

Maybe, but I don’t think that has been well tested legally yet. For instance, I’ve learned things from there, but when I share some knowledge I don’t attribute it to all the underlying sources of that knowledge. If, on the other hand, I shared a quote or copypasta from there, I suppose I’d be compelled to.

I’m just not sure how neural networks will be treated in this regard. I assume they’ll conveniently claim that they can’t tie answers directly to underpinning training data.