this post was submitted on 05 Sep 2024
40 points (100.0% liked)


So, I'm self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene/thing to later pick the best, so we can end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate finding programs put those images at 95% or more similarity.

I'm wondering if there's any way, probably at the file system level, for these similar images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

[–] [email protected] 13 points 1 week ago (3 children)

Cool idea. If this doesn't exist, and it probably doesn't, it sounds like a worthy project to get one's MSc or perhaps even PhD.

[–] [email protected] 2 points 1 week ago (1 children)

The first thing I would do when writing such a paper would be to test current compression algorithms by creating a collage of the similar images and seeing how that compares to the size of the individual images.
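
For illustration, a minimal sketch of that probe might look like this (Pillow-based; the file names are placeholders, and PNG stands in for whatever lossless codec you actually want to test):

```python
# Rough probe: stitch near-duplicate shots into one collage, save everything
# with the same lossless codec, and compare total sizes.
import os
from PIL import Image

paths = ["shot_01.jpg", "shot_02.jpg", "shot_03.jpg"]  # hypothetical inputs
images = [Image.open(p).convert("RGB") for p in paths]

# Naive horizontal collage; a real test might tile or align more carefully.
collage = Image.new("RGB", (sum(im.width for im in images),
                            max(im.height for im in images)))
x = 0
for im in images:
    collage.paste(im, (x, 0))
    x += im.width

total_individual = 0
for i, im in enumerate(images):
    name = f"individual_{i}.png"
    im.save(name, optimize=True)
    total_individual += os.path.getsize(name)

collage.save("collage.png", optimize=True)
print("sum of individual PNGs:", total_individual)
print("collage PNG:", os.path.getsize("collage.png"))
```

PNG's per-row filters won't exploit much redundancy between the tiles, which is exactly the kind of thing the probe would reveal; swapping in other codecs is the interesting part.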

[–] [email protected] 1 points 1 week ago (1 children)

Compressed length is already known to be a powerful metric for classification tasks, but requires polynomial time to do the classification. As much as I hate to admit it, you're better off using a neural network because they work in linear time, or figuring out how to apply the kernel trick to the metric outlined in this paper.

a formal paper on using compression length as a measure of similarity: https://arxiv.org/pdf/cs/0111054

a blog post on this topic, applied to image classification:

https://jakobs.dev/solving-mnist-with-gzip/
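
As a rough illustration of the metric in that paper, normalized compression distance can be sketched in a few lines with gzip as the compressor. The file names are placeholders, and for already-compressed formats like JPEG you'd want to decode to raw pixels first, otherwise the measure tells you very little:

```python
# Normalized compression distance (NCD): smaller means "more similar".
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
import gzip

def C(data: bytes) -> int:
    return len(gzip.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

with open("shot_01.jpg", "rb") as f1, open("shot_02.jpg", "rb") as f2:
    print(ncd(f1.read(), f2.read()))
```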

[–] [email protected] 1 points 1 week ago (2 children)

I was not talking about classification. What I was talking about was a simple probe at how well a collage of similar images compares in compressed size to the images individually. The hypothesis is that a compression codec would compress images with a similar color distribution better in a spritesheet than if it encoded each image individually. I don't know, the savings might be negligible, but I'd assume there is something to gain, at least for some compression codecs. I doubt doing deduplication post-compression has much to gain.

I think you're overthinking the classification task. These images are very similar and I think comparing the color distribution would be adequate. It would of course be interesting to compare the different methods :)
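
For what it's worth, here is a minimal sketch of that color-distribution comparison, using Pillow histograms and a simple intersection score (1.0 means identical distributions); the downscale size and any similarity cutoff are assumptions:

```python
# Compare two images by their normalized RGB histograms.
from PIL import Image

def color_hist(path: str, size: int = 64) -> list[float]:
    im = Image.open(path).convert("RGB").resize((size, size))
    hist = im.histogram()          # 256 bins per channel, 768 values total
    total = sum(hist)
    return [h / total for h in hist]

def histogram_similarity(path_a: str, path_b: str) -> float:
    a, b = color_hist(path_a), color_hist(path_b)
    return sum(min(x, y) for x, y in zip(a, b))   # histogram intersection

print(histogram_similarity("shot_01.jpg", "shot_02.jpg"))  # hypothetical files
```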

[–] [email protected] 2 points 1 week ago (1 children)

Wait.. this is exactly the problem a video codec solves. Scoot and give me some sample data!

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago) (1 children)

Yeah. That's what an MP4 does, but I was just saying that first you have to figure out which images are "close enough" to encode this way.
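
As a rough sketch of that idea, assuming ffmpeg is installed and the burst of near-duplicates has already been picked out (the file pattern and quality settings are placeholders, not tuned values):

```python
# Pack a burst of similar photos into a short video so inter-frame
# prediction can exploit the redundancy between them.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "1",            # one photo per frame
    "-pattern_type", "glob",
    "-i", "burst_*.jpg",          # hypothetical burst of near-duplicates
    "-c:v", "libx265",
    "-crf", "18",                 # near-lossless; true lossless needs extra encoder options
    "-pix_fmt", "yuv444p",        # avoid chroma subsampling losses
    "burst.mkv",
], check=True)
```

Whether this beats just keeping the original JPEGs depends heavily on the codec settings, which is the part worth measuring.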

[–] [email protected] 1 points 1 week ago (1 children)

It seems we're focusing on two different parts of the problem.

Finding the most optimal way to classify which images are best compressed in bulk is an interesting problem in itself. In this particular case, the person asking had already picked out similar images by hand, and those can be identified by their timestamps, which simplifies the similarity comparison. What I wanted to find out was how well the similar images can be compressed with various methods and codecs with minimal loss of quality. My goal was not to use it as a method to classify the images; it was simply to examine how well the compression stage would work with various methods.

[–] [email protected] 1 points 1 week ago

and my point was that this work has likely been done already: the paper I linked is 20 years old, and it talks about the deep connection between "similarity" and "compresses well". I bet if you read the paper, you'd see exactly why I chose to share it, particularly the equations that define NID and NCD.

The difference between "seeing how well similar images compress" and figuring out "which of these images are similar" is the quantized classification step, which is trivial compared to doing the distance comparison of every sample against every other sample. My point was that this distance measure (using compressors to measure similarity) has been published for at least 20 years, and that you should probably google "normalized compression distance" before spending any time implementing stuff, since it's very much been done before.

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago)

Yeah. I understand. But first you have to cluster your images so you know which ones are similar and can then do the deduplication. This would be a powerful way to do that. It's just expensive compared to other clustering algorithms.

My point in linking the paper is that "the probe" you suggested is a 20 year old metric that is well understood. Using normalized compression distance as a measure of Kolmogorov Complexity is what the linked paper is about. You don't need to spend time showing similar images will compress more than dissimilar ones. The compression length is itself a measure of similarity.
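
To make the two stages concrete, here is a toy sketch that uses NCD over decoded pixels as the clustering step; the grayscale downscale, threshold, and file list are all assumptions:

```python
# Greedy clustering of images by normalized compression distance.
import gzip
from PIL import Image

def raw(path: str) -> bytes:
    # Decode and downscale so gzip sees pixel data, not JPEG bitstreams.
    return Image.open(path).convert("L").resize((256, 256)).tobytes()

def ncd(x: bytes, y: bytes) -> float:
    c = lambda b: len(gzip.compress(b))
    return (c(x + y) - min(c(x), c(y))) / max(c(x), c(y))

paths = ["a.jpg", "b.jpg", "c.jpg", "d.jpg"]   # hypothetical files
data = {p: raw(p) for p in paths}

clusters: list[list[str]] = []
for p in paths:
    for cluster in clusters:
        if ncd(data[p], data[cluster[0]]) < 0.7:   # assumed threshold
            cluster.append(p)
            break
    else:
        clusters.append([p])
print(clusters)
```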

[–] [email protected] 0 points 1 week ago

Definitely PhD.

It's very much an ongoing and underexplored area of the field.

One of the biggest machine learning conferences is actually hosting a workshop on the relationship between compression and machine learning (because it's very deep). https://neurips.cc/virtual/2024/workshop/84753

[–] [email protected] -5 points 1 week ago (2 children)

The problem is that OP is asking for something to automatically make decisions for him. Computers don't make decisions, they follow instructions.

If you have 10 similar images and want a script to delete 9 you don't want, then how would it know what to delete and keep?

If it doesn't matter, or if you've already chosen the one out of the set you want, just go delete the rest. Easy.

As far as identifying similar images, this is high-school-level programming at best with a CV model. You just run a pass through something like YOLO and have it output similarity confidences for a set of images. The problem is you need a source image to compare against. If you're running through thousands of files comprising dozens or hundreds of sets of similar images, you need a source for each comparison.
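
For the "which shots are near-duplicates" step specifically, a much lighter-weight option than a full CV model is perceptual hashing. Here is a sketch using the third-party ImageHash library (pip install ImageHash); the distance threshold is an assumption and would need tuning:

```python
# Near-duplicate check via perceptual hash (Hamming distance between hashes).
from PIL import Image
import imagehash

def similar(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    ha = imagehash.phash(Image.open(path_a))
    hb = imagehash.phash(Image.open(path_b))
    return ha - hb <= max_distance   # subtraction gives the Hamming distance

print(similar("shot_01.jpg", "shot_02.jpg"))   # hypothetical files
```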

[–] [email protected] 5 points 1 week ago (1 children)

OP didn't want to delete anything, but to compress them all, exploiting the fact they're similar to gain efficiency.

[–] [email protected] -4 points 1 week ago (1 children)

Using that as an example. Same premise.

[–] [email protected] 0 points 1 week ago (1 children)

No, not really.

The problem is that OP is asking for something to automatically make decisions for him. Computers don't make decisions, they follow instructions.

The computer is not asked to make decisions like "pick the best image". The computer is asked to optimize, like with lossless compression.

[–] [email protected] -2 points 1 week ago (1 children)

That's not what he's asking at all

[–] [email protected] 1 points 1 week ago

yes, they are. reread the post, I just did so and I'm still confident

[–] [email protected] 1 points 1 week ago (1 children)

computers make decisions all the time. For example, how to route my packets from my instance to your instance. Classification functions are well understood in computer science in general, and, while stochastic, can be constructed to be arbitrarily precise.

https://en.wikipedia.org/wiki/Probably_approximately_correct_learning?wprov=sfla1

Human facial detection has been at 99% accuracy since the 90s, and OP's task is likely a lot easier, since we can exploit time and location proximity data and know in advance that 10 pictures taken of Alice or Bob at one single party are probably a lot less variant than 10 pictures taken in different contexts over many years.

What OP is asking to do isn't at all impossible-- I'm just not sure you'll save any money on power and GPU time compared to buying another HDD.
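
A rough sketch of exploiting that time proximity: read EXIF capture times with Pillow and bucket shots taken within a short window. The tag choice and window size are assumptions; a real tool would prefer DateTimeOriginal and also use GPS tags:

```python
# Group photos whose EXIF timestamps fall within a short window.
from datetime import datetime, timedelta
from PIL import Image

def taken_at(path: str):
    exif = Image.open(path).getexif()
    stamp = exif.get(306)          # EXIF "DateTime" tag
    return datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S") if stamp else None

def group_by_time(paths, window=timedelta(seconds=30)):
    stamped = []
    for p in paths:
        ts = taken_at(p)
        if ts is not None:
            stamped.append((ts, p))
    stamped.sort()

    groups, current = [], []
    for ts, p in stamped:
        if current and ts - current[-1][0] > window:
            groups.append([name for _, name in current])
            current = []
        current.append((ts, p))
    if current:
        groups.append([name for _, name in current])
    return groups
```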

[–] [email protected] -2 points 1 week ago (1 children)

Everything you just described is instruction. Everything from an input path and desired result can be tracked and followed to a conclusory instruction. That is not decision making.

Again. Computers do not make decisions.

[–] [email protected] 1 points 1 week ago (1 children)

Agree to disagree. Something makes a decision about how to classify the images and it's certainly not the person writing 10 lines of code. I'd be interested in having a good faith discussion, but repeating a personal opinion isn't really that. I suspect this is more of a metaphysics argument than anything and I don't really care to spend more time on it.

I hope you have a wonderful day, even if we disagree.

[–] [email protected] 0 points 1 week ago (1 children)

It's Boolean. This isn't an opinion, it's a fact. Feel free to get informed though.

[–] [email protected] 1 points 1 week ago (1 children)

Then it should be easy to find peer reviewed sources that support that claim.

I found it incredibly easy to find countless articles suggesting that your Boolean is false. Weird hill to die on. Have a good day.

https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=computer+decision+fairness&oq=computer+decison

[–] [email protected] 0 points 1 week ago (1 children)
[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (1 children)

You seem very upset, so I hate to inform you that neither one of those is a peer-reviewed source, and that they are simplifying things.

"Learning" is definitely something a machine can do and then they can use that experience to coordinate actions based on data that is inaccesible to the programmer. If that's not "making a decision", then we aren't speaking the same language. Call it what you want and argue with the entire published field or AI, I guess. That's certainly an option, but generally I find it useful for words to mean things without getting too pedantic.

[–] [email protected] -1 points 1 week ago (1 children)

🙄

"Pedantic Asshole tries the whole 'You seem upset' but on the Internet and proceeds to try and explain their way out of being embarrassed about being wrong, so throws some idiotic semantics into a further argument while wrong."

Great headline.

Computers also don't learn, or change state. Apparently you didn't read the CS101 link after all.

Also, another newsflash is coming in here, one sec:

"Textbooks and course plans written by educators and professors in the fields they are experts in are not 'peer reviewed' and worded for your amusement, dipshit."

Whoa, that was a big one.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

I think there's probably a difference between an intro to computer science course and the PhD level papers that discuss the ability of machines to learn and decide, but my experience in this is limited to my PhD in the topic.

And, no, textbooks are often not peer reviewed in the same way and generally written by graduate students. They have mistakes in them all the time. Or grand statements taken out of context. Or are simplified explanations because introducing the nuances of PAC-learnability to somebody who doesn't understand a "for" loop is probably not very productive.

I came here to share some interesting material from my PhD research topic and you're calling me an asshole. It sounds like you did not have a wonderful day and I'm sorry for that.

Did you try learning about how computers learn things and make decisions? It's pretty neat.