Nightshade, the free tool that ‘poisons’ AI models, is now available for artists to use
(venturebeat.com)
I've only heard that running images through a VAE just once seems to break the Nightshade effect, but no one's really published anything yet.
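The claim above is about the encode/decode round trip used in latent-diffusion systems: pixels go through a VAE encoder into a compressed latent, then back out through the decoder. Here is a minimal sketch of that round trip with a toy PyTorch VAE. Everything here is illustrative: the architecture is a made-up stand-in for a real pretrained VAE (such as Stable Diffusion's AutoencoderKL), the weights are random, and nothing about this toy demonstrates whether the round trip actually removes Nightshade's perturbations.

```python
import torch
import torch.nn as nn

# Toy stand-in for a latent-diffusion VAE. Weights are random, so the
# reconstruction is meaningless -- in practice you would load a
# pretrained VAE and its real preprocessing.
class ToyVAE(nn.Module):
    def __init__(self, latent_channels: int = 4):
        super().__init__()
        # 8x spatial downsampling, roughly like the SD VAE
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 2 * latent_channels, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1),
            nn.SiLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.SiLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        mean, logvar = self.encoder(x).chunk(2, dim=1)
        # Sample from the latent distribution (the "variational" part)
        return mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z)

# The claimed pipeline: encode the (possibly poisoned) image to a
# lossy latent, then decode back to pixels. Any perturbation that does
# not survive this compression round trip is lost.
vae = ToyVAE()
image = torch.rand(1, 3, 256, 256)  # fake RGB image in [0, 1]
cleaned = vae.decode(vae.encode(image))
```

The key point the comment relies on is that the latent is heavily compressed (here 3×256×256 pixels become 4×32×32 latents), so the round trip is lossy by construction.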
You can finetune models on known-bad, incoherent images so that, when the trained embedding is used in the negative prompt, they output better images. So there's a chance that making a lot of purposefully bad data could actually make models better, by helping the model recognize bad output and avoid it.
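Mechanically, a negative prompt works through classifier-free guidance: each denoising step moves the sample toward the positive condition and away from the negative one. The sketch below shows only that arithmetic, with a fake linear function standing in for the U-Net's noise prediction; the embeddings, the function `eps`, and the guidance scale of 7.5 are all illustrative assumptions, not anyone's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))

def eps(latent: np.ndarray, text_embedding: np.ndarray) -> np.ndarray:
    # Fake stand-in for a denoiser's noise prediction conditioned
    # on a text embedding (a real model would run a U-Net here).
    return W @ (latent + 0.1 * text_embedding)

latent = rng.standard_normal(8)
prompt_emb = rng.standard_normal(8)  # embedding of the desired prompt
bad_emb = rng.standard_normal(8)     # embedding trained on bad images

scale = 7.5
# Classifier-free guidance: the negative condition serves as the
# baseline, so guidance pushes the sample toward the prompt and
# away from whatever the "bad image" embedding represents.
guided = eps(latent, bad_emb) + scale * (
    eps(latent, prompt_emb) - eps(latent, bad_emb)
)
```

This is why an embedding deliberately trained on bad output can be useful: placing it in the negative slot steers generation away from exactly those failure modes.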
VAE?
Think they mean a Variational AutoEncoder
Variational. But no, running it through that will not break any effect