this post was submitted on 22 Aug 2023
9 points (90.9% liked)
Technology
you are viewing a single comment's thread
I don't get why this is an issue. Assuming they purchased a legal copy of whatever it was trained on, what's the problem? Like, really. What does it matter that it knows a certain book from cover to cover or is able to imitate art styles, etc.? That's exactly what people do too; we're just not quite as good at it.
A copyright holder has the right to control who may create derivative works based on their copyrighted work. If you want to take someone's copyrighted work and use it to create something else, you need permission from the copyright holder.
The one major exception is fair use. It is unlikely that AI training qualifies as fair use; however, as far as I am aware, this point has not yet been adjudicated in court.
It is not a derivative work; it is a transformative work. Just as human artists "synthesise" the art they see around them and make new art, so do LLMs.
This is so fucking stupid, though. Almost everyone reads books and/or watches movies, and our speech develops from that. The way we speak is modeled after characters and dialogue in books; the way we think is often shaped by books. Do we track down what percentage of each sentence comes from which book every time we think or talk?