this post was submitted on 26 Jan 2024
430 points (83.1% liked)
So, by that logic: I prompted you with a question. Did I create your comment?
I used you as a tool to generate language. If it was a Pulitzer winning response could I gain the plaudits and profit, or should you?
If it then turned out to be plagiarism on your part, should I get the credit for that?
Am I liable for what you say when I have had no input into the generation of your personality and thoughts?
The creation of that image required building a machine learning model.
It required training a machine learning model.
It required prompting that machine learning model.
All 3 are required steps to produce that image and all part of its creation.
The part copyright holders will focus on is the training.
Human beings are held liable if they see and then copy an image for monetary gain.
An AI has done exactly this.
It could be argued that the most responsible and controlled element of the process, and therefore the most liable, is the input of training data.
Either the AI model is allowed to absorb the world and create work, and is held liable under the same rules as a human artist. The AI is liable.
Or the AI model is assigned no responsibility itself but should never have been given copyrighted work without a license to reproduce it.
Either way the owners have a large chunk of liability.
If I ask a human artist to produce a picture of Donald Duck, they legally can't. Even if they went ahead and broke the law anyway, Disney could take them to court and win.
The same would be true of any business.
The same is true of an AI as either its own entity, or the property of a business.
I'm not a non-sentient construct that creates stuff.
...and when copyright law was written, there were no non-sentient things generating stuff.
There is literally no way to prove whether you're sentient.
Descartes found that limitation.
The only definition in law is whether you have competency to be responsible. The law assumes you do as an adult unless it's proven you don't.
Given the limits of AI the court is going to assume it to be a machine. And a machine has operators, designers, and owners. Those are humans responsible for that machine.
It's perfectly legitimate to sue a company for using a copyright breaking machine.
You almost seem like you get the problem, but then you flounder away.
Law hasn't caught up with the world of generative programs. AI will not be considered sentient, and they will have this same discussion in court.
It doesn't matter whether AI is sentient or not. It has a designer, trainer, and owner.
Once you prove the actions taken by the AI, even as just a machine, breach copyright, liability is easily assigned.
Agree to disagree, and time will tell, but you must see there are factors here that haven't existed before in the history of humanity.
Who knows how the laws will change because of AI. But as the law currently stands it's just a matter of proving it to a court. That's the main barrier.
This is strong evidence an AI is breaking the law.
That Joker could have been somebody's avatar picture with a matching username.
AI can't understand copyright, and useful AI can't be built by shielding it from every piece of material somebody thinks is their IP. It needs to learn to understand humans, and it needs human material to do so. A shitload of it. Who's up for some manual filtering?
If we go by NYTimes standards we better mothball the entire AI endeavor.
That's why it's a massive legal fight.
They'll delay a ruling as long as possible.
They're definitely developing a new model on vetted public domain data as we speak. They just need to delay legal action long enough to get that new model to launch.
This is the same thing YouTube did. Delay all copyright claims in court, blaming users, then put their copyright claim system in place that massively advantages IP owners.