It seems pretty obvious to me that the artists should win this, assuming their images weren't permissively licensed. Training AI is absolutely a commercial use.
These companies adopted a "run fast and don't look back" legal strategy, and now they're going to enter the "find out" phase.
This is a pretty old story; the EFF already weighed in on it back in April.
"The Stable Diffusion model makes four gigabytes of observations regarding more than five billion images. That means that its model contains less than one byte of information per image analyzed (a byte is just eight bits—a zero or a one)."
What a great article, it really lays it out well and concisely. I like the above point especially.
Yeah, there's gold wherever you look. I like:
I would like to agree with you, but I have doubts this lawsuit will stick because of how prominent corporations are in US law.
There's nothing in copyright law that covers this scenario, so anyone that says it's "absolutely" one way or the other is telling you an opinion, not a fact.
It's like suing an artist because they learnt to paint based on your paintings. But also not, because the company has acquired your art and fed it into an application.
It's a very tricky area.
I don't think it's obvious at all. Both legally speaking - there is no consensus around this issue - and ethically speaking, because AIs fundamentally function the same way humans do.
We take in input, some of which is bound to be copyrighted work, and we mesh them all together to create new things. This is essentially how art works. Someone's "style" cannot be copyrighted, only specific works.
The government recently announced an inquiry into the copyright questions surrounding AI. They are going to make recommendations to Congress about potential legislation, if any, that they think would be a good idea. I believe there's a period of public comment until mid-October, if anyone wants to write a comment.
I really hope you're wrong.
And I think there's a difference. Humans can draw stuff, build structures, and make tools in a way that improves upon the previous iteration. Each artist adds something, or combines things in a way that makes for something greater.
AI art literally cannot do anything without human training data. It can't take a previous result, be inspired by it, and make it better. There has to be actual human input; it can't train itself on its own data the way humans do. It absolutely does not "work the same way".
AI art has NEVER made me feel like it's greater than the sum of its parts. Unlike art made by humans, which makes me feel that way all the time.
If a human does art without input, you still get "something".
With an AI, you don't have that. Without the training data, you have nothing.
Ok, take a human being who has never interacted with another human and has never consumed any content created by humans. Give him finger paint and have him paint something on a blank canvas. I think it wouldn't look any different from a chimpanzee's finger painting.
In theory, it could. You would just need a way to quantify the "fitness" of a drawing. Today they do this by comparing against actual content, but you don't need actual content in some circumstances. For example, look at AlphaZero, DeepMind's chess AI from a few years back. All the AI knew was the rules of the game. It did not have access to any database of games. No data. The way it learned is that it played millions of games against itself.
It trained itself on its own data. And that AI, at the time, beat the leading chess engine that had access to databases and other pre-built algorithms.
With art this gets trickier because art is subjective. You can clearly quantify whether you won or lost a chess game. How do you quantify whether something is a good piece of art? If we could somehow quantify that, you could in theory create an AI that generates art with no input.
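To make the "quantify fitness" idea concrete, here's a toy sketch of learning purely from self-generated attempts, with no external dataset. This is a bare-bones hill climber, not AlphaZero's actual algorithm (which combines deep networks with tree search); the target string and mutation scheme are invented for the demo:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "a good move"  # stands in for "a winning game"; invented for the demo

def fitness(candidate: str) -> int:
    # Score = number of positions that match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Randomly rewrite one character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from pure noise; every "training example" from here on is self-generated.
best = "".join(random.choice(ALPHABET) for _ in TARGET)
for step in range(10_000):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child  # keep improvements (and neutral drift)
    if best == TARGET:
        print(f"solved in {step} steps")
        break
print(f"final: {best!r} ({fitness(best)}/{len(TARGET)})")
```

The only thing the loop ever consumes is its own output plus a score, which is the same shape as AlphaZero's setup: self-play games plus a win/loss signal.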
We're in the infancy stages of this technology.
AI can do all of the same things. I know it's scary, but it's here and it isn't going away. AI-designed systems are becoming more and more commonplace: solar panels, medical devices, computer hardware, aircraft wings, potential drug compounds, etc. Certain things AI can be really good at, and designing things and testing them in a million different simulations is something AI can do a lot better than humans.
What is art? If I make something that means nothing and you find a meaning in it, is it meaningful? AI is a cold, calculated mathematical model that produces meaningless output. But humans love finding patterns in noise.
Trust me, you will eventually see some sort of AI art that makes an impact on you. Math doesn't lie. If statistics can turn art into data and find the hidden patterns that make something impactful, then it can recreate those patterns in a way that is impactful.
The randomness that current machine learning uses to train neural networks will never be able to do what a human does when they are being creative.
I have no doubt AI art will be able to "say" things. But it won't be saying things that haven't already been said.
And yes, AI can brute force its way to solutions in ways humans cannot beat. But that only works when there is a solution. So AI works with science, engineering, chess.
Art does not have a "solution". Every answer is valid. Humans are able to create good art, because they understand the question. "What is it to be human?" "Why are we here?" "What is adulthood?" "Why do I feel this?" "What is innocence?"
AI does not understand anything. All it is doing is mimicking art already created by humans, and coincidentally sometimes getting it right.
It's not brute force. It seems like brute force because trying something millions of times seems impossible to us. But these systems identify patterns and then use those patterns to create output. It's learning; it's why we call it "machine learning". The mechanics are different from how humans do it, but fundamentally it's the same.
The only reason you know what a tree looks like is because you've seen a million different trees: trees in person, trees in movies, trees in cartoons, trees in drawings, etc. Your brain has taken all of these different trees and merged them together to create an "ideal" of the tree, sort of like Plato's "world of forms".
AI can recognize a tree through the same process. It views millions of trees and creates an "ideal" tree. It can then compare any image it sees against this ideal and determine the probability that it is or isn't a tree. Combine this with something that randomly pumps out images, and you can compare the generated images against that internal model; all of a sudden you have an AI that can create novel images of trees.
It's fundamentally the same thing we do: creating pictures of trees that didn't exist before. The only difference is that it happens in a statistical model, at a larger and faster scale than humans are capable of.
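What's described above is roughly the generator/discriminator idea behind GANs. Here's a schematic sketch; both functions are invented placeholders standing in for trained models, not any real library's API:

```python
import random

def tree_likeness(image: list[float]) -> float:
    # Placeholder for a trained classifier's P(image is a tree).
    # Here we just pretend a higher mean pixel value = more tree-like.
    return sum(image) / len(image)

def random_image(size: int = 64) -> list[float]:
    # Placeholder generator. Real systems sample from a learned model,
    # not uniform noise, which is why they converge in practice.
    return [random.random() for _ in range(size)]

# Generate candidates and keep the one that best matches the "ideal tree".
best, best_score = None, -1.0
for _ in range(1_000):
    candidate = random_image()
    score = tree_likeness(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"best tree-likeness score: {best_score:.3f}")
```

In a real GAN the generator is itself trained against the discriminator's score rather than just filtered by it, but the feedback loop is the same.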
This is why the question of whether AI models have to pay copyright holders for the content they parse is not obvious at all.
If every answer is valid then you would be sitting here saying that AI art is just as valid as anything else.
This is a tough one, because they are not directly making money from the copyrighted material.
Isn't this a bit the same as using short samples of somebody's song in your own song, or somebody getting inspired by somebody else's artwork and creating something similar?
If you're sampling music, you ought to be compensating the licence holder unless it's public domain or your work falls under a fair use exception.
Sampling music is literally placing parts of that music in the final product. Gen AI is not placing pieces of other people's art in the final image; in fact, it doesn't store any image data at all. Using an image in the training data is akin to an artist including that image on their moodboard, except the AI's moodboard has way more images, and the odds of the output being too similar to any single particular image are lower than when a human does it.
Are you speaking legally or morally when you say someone "ought" to do something?