this post was submitted on 05 Oct 2023
171 points (89.4% liked)


cross-posted from: https://programming.dev/post/3974080

Hey everyone. I made a casual survey to see if people can tell the difference between human-made and AI generated art. Any responses would be appreciated, I'm curious to see how accurately people can tell the difference (especially those familiar with AI image generation)

[–] [email protected] 11 points 1 year ago (11 children)

14 / 20 here. I dunno why there are so many people, particularly on Reddit, who absolutely hate AI art. Yeah, some of it can look janky or uncanny-valley, but a lot of it looks really damn cool.

And not all of us have the talent to create visual art of our own, or the money to commission pieces from human artists, so generating images from text prompts is a much more accessible way for us to explore our imaginations.

[–] [email protected] 0 points 1 year ago (4 children)

Personally, I have no issue with models trained on material obtained with explicit consent. Otherwise you're just exploiting artists' labor without their consent.

(Also if you're just making random images for yourself, w/e)

((Also also, text models are a separate debate and imo much worse considering they're literally misinformation generators))

Note: if anybody wants to reply with "actually AI models learn like people so it's fine", please don't. No they don't. Bugger off. Here, have a source: https://arxiv.org/pdf/2212.03860.pdf

[–] [email protected] 3 points 1 year ago (3 children)

This paper is just about stock photos and video game art with enough duplicates or near-variations that they didn't get cut from the training set. Those repeated images appeared frequently enough to overfit, which is something we already knew. That doesn't really prove whether diffusion models learn like humans or not. Not that I think they do.
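(For anyone curious how those duplicates get caught in the first place, here's my own illustration, not something from the paper: a quick perceptual-hash pass over the training set is one common dedup approach. A minimal sketch, assuming the Pillow and imagehash packages and a made-up directory name:)

```python
# Hypothetical sketch: flag near-duplicate images in a training set with a
# perceptual hash, so heavily repeated images could be dropped before training.
from pathlib import Path
from collections import defaultdict

from PIL import Image
import imagehash


def find_near_duplicates(image_dir: str, max_distance: int = 4):
    """Group images whose perceptual hashes are within max_distance bits."""
    groups = defaultdict(list)
    seen = []  # (hash, path) pairs encountered so far
    for path in Path(image_dir).glob("*.jpg"):
        h = imagehash.phash(Image.open(path))
        # Compare against previously seen hashes (O(n^2), fine for a sketch).
        for seen_hash, seen_path in seen:
            if h - seen_hash <= max_distance:  # Hamming distance in bits
                groups[str(seen_path)].append(str(path))
                break
        else:
            seen.append((h, path))
    return groups  # key: first-seen image, value: its near-duplicates


if __name__ == "__main__":
    for original, dupes in find_near_duplicates("training_images/").items():
        print(original, "has", len(dupes), "near-duplicate(s)")
```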

[–] [email protected] 1 points 1 year ago (1 children)

Sure, it's not proof, but it's a good starting point. Non-overfitted images would still show this effect (to a lesser extent), and this would never happen to a human. And it's not like the prompts were the image labels; the model just decided to use the stock image as a template (obvious in the case with the painting).

[–] [email protected] 1 points 1 year ago (1 children)

Non-overfitted images would still have this effect (to a lesser extent),

This is a bold claim to make with no evidence, when every training image accounts for less than one byte of data in the model. Even the tiniest image files contain many thousands of bytes, and one byte isn't even enough to store a single character of text beyond basic ASCII: accented Latin letters and many symbols take two bytes in UTF-8.
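(For what it's worth, here's the back-of-envelope arithmetic behind that "less than one byte per image" figure. These are my own rough numbers; the parameter count and dataset size are approximate public figures for Stable Diffusion v1.x and LAION-2B, used only as an illustration:)

```python
# Rough back-of-envelope check of the "less than a byte per training image" claim.
model_parameters = 1.0e9    # ~1 billion parameters (UNet + text encoder + VAE), approximate
bytes_per_parameter = 2     # fp16 weights
training_images = 2.3e9     # ~2.3 billion image-text pairs (LAION-2B-en), approximate

model_size_bytes = model_parameters * bytes_per_parameter
bytes_per_image = model_size_bytes / training_images

print(f"Model size: {model_size_bytes / 1e9:.1f} GB")               # ~2.0 GB
print(f"Capacity per training image: {bytes_per_image:.2f} bytes")  # ~0.87 bytes
```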

and this would never happen to a human.

There are plenty of artists who get stuck with same-face, Sam Yang for instance. Then there are others who can't draw disabled people or people of color; if it isn't a beautiful white female character, they can't do it. It can take a lot of additional training for people to break out of their rut, and some never do.

I'm not going to tell you that latent diffusion models learn like humans, but they are still learning. Here, have a source: https://arxiv.org/pdf/2306.05720.pdf

I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven't already. The EFF is a digital rights group that most recently won a historic case: border guards in the US now need a warrant to search your phone.

This guy also does a pretty good job of explaining how latent diffusion models work; you should give it a watch too.
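(Not from the linked video, just my own toy sketch of the idea it covers: a latent diffusion model starts from noise in a small latent space and repeatedly subtracts the noise a denoiser predicts, conditioned on a text embedding, then decodes the result to pixels with a VAE. The denoiser and schedule below are stand-ins, assuming only PyTorch:)

```python
# Toy illustration of the latent-diffusion sampling loop. The "denoiser" here is
# a stand-in module, not a trained UNet, and the noise schedule is simplified.
import torch

torch.manual_seed(0)

latent = torch.randn(1, 4, 64, 64)        # pure noise in latent space (not pixels)
text_embedding = torch.randn(1, 77, 768)  # stand-in for a CLIP text encoding
                                          # (unused here; a real UNet is conditioned on it)

denoiser = torch.nn.Conv2d(4, 4, kernel_size=3, padding=1)  # stand-in for the UNet

num_steps = 50
for step in range(num_steps):
    with torch.no_grad():
        predicted_noise = denoiser(latent)  # a real model also sees the timestep and text embedding
    # Remove a fraction of the predicted noise; real samplers (DDIM, DPM-Solver)
    # use a learned noise schedule instead of this fixed fraction.
    latent = latent - (1.0 / num_steps) * predicted_noise

# A real pipeline would now decode `latent` into an image with the VAE decoder.
print("final latent stats:", latent.mean().item(), latent.std().item())
```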

[–] [email protected] 2 points 1 year ago

Here is an alternative Piped link(s):

explaining how latent diffusion models work

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.
