this post was submitted on 28 Jan 2024
380 points (95.2% liked)

Technology

GenAI tools ‘could not exist’ if firms are made to pay copyright::undefined

[–] [email protected] 2 points 11 months ago (1 children)

The Kit Walsh article purposely handwaves around a couple of points that could become larger issues as lawsuits in this arena continue.

  1. She says that, given the size of the training set relative to the model, only about a byte of data per image could be stored in any compressed form. But this assumes all training data is treated equally; it's very possible that certain images are compressed/stored in the weights far more heavily than others.

  2. These models don't produce exact copies. Beyond the Getty issue, The New York Times recently published an article showing a near-duplicate output: https://www.nytimes.com/interactive/2024/01/25/business/ai-image-generators-openai-microsoft-midjourney-copyright.html
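For a sense of scale on the bytes-per-image claim in point 1, the arithmetic is just division. The numbers below are illustrative assumptions (rough Stable Diffusion v1 / LAION orders of magnitude), not measured values:

```python
# Back-of-envelope: average information per training image that the
# weights could possibly hold. All numbers are assumed, for illustration.
params = 860_000_000             # ~Stable Diffusion v1 UNet parameter count
bytes_per_param = 2              # fp16 storage
training_images = 2_000_000_000  # order of the LAION-2B dataset

bytes_per_image = (params * bytes_per_param) / training_images
print(f"{bytes_per_image:.2f} bytes per image on average")
```

As the point above notes, this is only an average; it says nothing about whether some heavily duplicated images occupy far more than their share of the weights.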

I think some of the points she makes are valid, but she's making a lot of assumptions about what is actually going on inside these models, things we either don't know for certain or have evidence against.

I didn't read Katherine's article so maybe there is something more there.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (1 children)

She addresses both of those, actually. The Midjourney thing isn't new; it's the sign of a poorly trained model.

[–] [email protected] 2 points 11 months ago (1 children)

I'm not sure she does. I just read the article, and it focuses primarily on what models can train on. However, the real meat of the issue with GenAI, at least as I see it, is what it produces.

For example, if I built a model that just spit out exact frames from "Space Jam", I don't think anyone would argue that isn't a problem. The question is: where is the line?

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (1 children)

This part does:

It’s not surprising that the complaints don’t include examples of substantially similar images. Research regarding privacy concerns suggests it is unlikely that a diffusion-based model will produce outputs that closely resemble one of the inputs.

According to this research, there is a small chance that a diffusion model will store information that makes it possible to recreate something close to an image in its training data, provided that the image in question is duplicated many times during training. But the chances of an image in the training data set being duplicated in output, even from a prompt specifically designed to do just that, is literally less than one in a million.

The linked paper goes into more detail.
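On the duplication point: counting how often an exact file repeats in a dataset is straightforward; the hard part is near-duplicates, which need perceptual hashing instead. A minimal exact-duplicate sketch, with toy byte blobs standing in for image files:

```python
import hashlib
from collections import Counter

def digest(data: bytes) -> str:
    """Exact-duplicate fingerprint; near-duplicates would need perceptual hashing."""
    return hashlib.sha256(data).hexdigest()

# Toy "dataset": byte blobs standing in for image files.
dataset = [b"cat-photo", b"dog-photo", b"cat-photo", b"cat-photo", b"logo"]
counts = Counter(digest(blob) for blob in dataset)

# Images appearing many times are the memorization risk the paper flags.
duplicated = [h for h, n in counts.items() if n > 1]
print(len(duplicated))  # 1: only "cat-photo" (seen 3 times) repeats
```

This kind of dataset audit is exactly what the "provided the image is duplicated many times" caveat invites.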

On the note of output, I think you’re responsible for infringing works, whether you used Photoshop, copy & paste, or a generative model. Also, specific instances will need to be evaluated individually, and there might be models that don't qualify. Midjourney's new model is so poorly trained that it's downright easy to get these bad outputs.

[–] [email protected] 1 points 11 months ago (2 children)

This goes back to my previous comment about handwaving away the details. There is a model out there that is clearly reproducing copyrighted material almost identically (the NYT article), and we also have issues with models spitting out training data: https://www.wired.com/story/chatgpt-poem-forever-security-roundup/. Clearly, people studying these models don't fully know what is actually possible.

Additionally, it only takes one instance to show that these models, in general, can and do regurgitate copyrighted data. Whether that passes the bar for legal consequences, we'll have to see, but I think it's dangerous to take at face value a couple of statements made by people who don't seem to appreciate the unknowns in this space.
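The regurgitation failure mode is easy to reproduce in miniature. This is not how a transformer works internally, just a toy character-level Markov chain, but it shows how heavy duplication in training data leads straight to verbatim output:

```python
from collections import defaultdict

ORDER = 6  # characters of context

def train(text, order=ORDER):
    """Map each length-`order` context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length, order=ORDER):
    """Greedy generation: emit the most frequent continuation each step."""
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:
            break
        out += max(sorted(set(followers)), key=followers.count)
    return out

passage = "the quick brown fox jumps over the lazy dog. "
corpus = passage * 200                    # same passage duplicated many times
model = train(corpus)
sample = generate(model, passage[:ORDER], 60)
print(sample in corpus)                   # True: verbatim regurgitation
```

Every generated character comes from counting duplicated training text, which is why deduplication is a standard mitigation in real training pipelines.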

[–] [email protected] 4 points 10 months ago

The ultimate issue is that the models don't encode the training data in any way we have historically considered copyright infringement. This is true for both transformer architectures (GPT) and diffusion ones (most image generators). From a lay perspective, it's reasonably accurate to imagine the models themselves as enormous nets that learn vague, muddled impressions of portions of many pieces of the training data, at arbitrary locations within the net.

Now, this may still have IP implications for the outputs, and here music copyright is pretty instructive, albeit very case-by-case: if a piece is too "inspired" by a particular previous work, it may still be regarded as infringing even without explicit copying. But, like I said, this is very case-specific, and precedent cuts both ways.
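The "muddled impressions" picture can be illustrated with a deliberately crude linear stand-in (not a real diffusion or transformer model): squeeze many random "images" through a model with far too little capacity to store any one of them, and measure how much of each individual image survives.

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 tiny "images", each a 64-dim vector, squeezed through a model
# whose capacity is far too small to store any of them exactly.
images = rng.normal(size=(500, 64))

# Rank-4 linear "model": the best 4-dimensional summary of all 500 images.
u, s, vt = np.linalg.svd(images, full_matrices=False)
reconstruction = (u[:, :4] * s[:4]) @ vt[:4]

# Per-image error: the model keeps a blended impression, not copies.
err = np.linalg.norm(images - reconstruction, axis=1) / np.linalg.norm(images, axis=1)
print(f"mean relative error: {err.mean():.2f}")  # close to 1.0: little per-image detail survives
```

When the data contains many near-identical copies of one item, the cheapest summary is to store that item well, which is the duplication-driven memorization discussed above.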

[–] [email protected] 1 points 11 months ago

The article dealt with Stable Diffusion, the only major model open enough for people to study it. If there were more problems with Stable Diffusion, we'd have heard of them by now. That's the critical benefit open-source development offers here: by making AI accessible, we maximize public participation and understanding, foster responsible development, and help prevent harmful attempts at control.

As it stands, she was much better informed than you are and is an expert in law to boot. On the other hand, you're making a sweeping generalization right into an appeal to ignorance. It's dangerous to assert a proposition just because it has not been proven false.