this post was submitted on 03 Nov 2023
123 points (86.8% liked)
Technology
you are viewing a single comment's thread
AI models generally produce different outputs for the same input (they appear to be non-deterministic), so it would be impossible to confirm that exactly.
But I suppose what they mean is that the outputs appear to be of the same quality. Taking longer does not appear to decrease the quality of the output.
I suppose you could give an AI the same input, resetting it after each run, and then use statistical models to identify common traits. Then do the same thing on different hardware, run the same statistical analysis, and see if there is a difference between group A and group B. But as far as I'm aware, no one has done this.
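That comparison could be sketched roughly like this. Everything here is made up for illustration: the per-image statistic (mean brightness), the group sizes, and the use of Welch's t-statistic are just one plausible way to set up the test, not anyone's actual methodology.

```python
import math
import random
import statistics

def welch_t(a, b):
    # Welch's t-statistic for two samples with possibly unequal variances:
    # (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

random.seed(0)
# Stand-in for a per-image summary statistic (say, mean brightness) measured
# over many generations from hardware A and hardware B
group_a = [random.gauss(0.50, 0.05) for _ in range(200)]
group_b = [random.gauss(0.50, 0.05) for _ in range(200)]

t = welch_t(group_a, group_b)
print(f"t = {t:.2f}")  # a small |t| would suggest no detectable hardware difference
```

In practice you would replace the simulated numbers with real measurements and use many statistics at once, but the overall shape of the experiment is the same: two groups, one variable changed, then a significance test.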
In theory, hardware shouldn't matter; it's all mathematics, basically, and one plus one always equals two, so there shouldn't be any fluctuations.
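One wrinkle, though: that holds for integer arithmetic, but GPUs mostly do floating-point math, where the order of operations can change the low bits of a result, so tiny hardware-dependent fluctuations are at least conceivable. A minimal illustration:

```python
# Integer arithmetic is bit-exact on any machine
print(1 + 1 == 2)  # True everywhere

# Floating-point addition is not associative, so the grouping (and thus the
# parallel reduction order a particular GPU happens to use) can change a result
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
```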
Yes, I suppose that given equal input (model, prompt, seed, etc.) two Stable Diffusion installs should output the same images; what I am curious about is whether the hardware configuration (e.g. the GPU manufacturer) could produce traceable variations. As abuse of this tech gains prominence, tracing synthetic media back to the specific hardware combination that produced it could become a thing.
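The seed part of that can be illustrated with a toy stand-in for the sampler (`generate` here is a made-up function, not the actual Stable Diffusion API): if all the randomness flows from one seed, the same seed reproduces the same trajectory exactly.

```python
import random

def generate(seed, n=5):
    # Toy stand-in for a diffusion sampler: every "random" draw comes from a
    # PRNG initialized with the seed, so identical seeds give identical runs
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

print(generate(1234) == generate(1234))  # True: same seed, same outputs
print(generate(1234) == generate(5678))  # False: different seed diverges
```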
While it could work like bullet forensics, where given access to the gun you can fire it and compare the result to the original bullet, there is no way to look at a generated image and determine exactly what produced it: there are simply too many variables and random influences. Well, unless the creator is careless enough to leave the metadata enabled; by default, the AUTOMATIC1111 Stable Diffusion web UI embeds all of it in the file itself as a PNG text comment.
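For what it's worth, those PNG comments are standard tEXt chunks (AUTOMATIC1111 uses the keyword `parameters`, as far as I know), so they can be read with nothing but the standard library. This sketch builds a tiny synthetic PNG just to have something to parse; the prompt and settings in it are invented:

```python
import struct
import zlib

def png_chunks(data):
    # A PNG is an 8-byte signature followed by chunks:
    # 4-byte big-endian length, 4-byte type, payload, 4-byte CRC
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        yield ctype, data[pos + 8:pos + 8 + length]
        pos += 12 + length

def extract_parameters(data):
    # tEXt payload is: keyword, NUL separator, Latin-1 text
    for ctype, payload in png_chunks(data):
        if ctype == b"tEXt":
            key, _, value = payload.partition(b"\x00")
            if key == b"parameters":
                return value.decode("latin-1")
    return None

def make_chunk(ctype, payload):
    # CRC covers the chunk type and payload, per the PNG spec
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

# Build a minimal synthetic PNG carrying a "parameters" tEXt chunk (demo only)
text = b"parameters\x00a photo of a cat\nSteps: 20, Seed: 1234"
png = (b"\x89PNG\r\n\x1a\n"
       + make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + make_chunk(b"tEXt", text)
       + make_chunk(b"IEND", b""))

print(extract_parameters(png))  # prints the prompt and settings
```

Running the same extractor on a real file straight out of the web UI should recover the full prompt, sampler, steps, and seed, assuming the "save metadata" option was left on.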