[–] [email protected] 45 points 1 year ago* (last edited 1 year ago) (15 children)

Say, if you compress some data using one of these LLMs, how hard is it to decompress the data again without access to the LLM used to perform the compression? Will the compression "algorithm" used by the LLM be the same for all runs (which means you could probably reverse engineer it to create a decompressor program), or will it be different every time it compresses new data?

I mean, having to download a huge LLM to decompress some data, which probably also requires a GPU with a lot of VRAM, seems a bit much.
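
(For illustration, here's a toy sketch of how model-based compression can be deterministic and still require the identical model to decode. This is my own made-up construction, not the paper's actual scheme; the stand-in "model", the function names, and the ranking heuristic are all hypothetical.)

```python
# Toy sketch: the compressor records, for each character, the rank that the
# model's deterministic prediction gave it; the decompressor replays the
# exact same predictions to map ranks back to characters.

def toy_model_ranking(context: str) -> list[str]:
    # Stand-in for an LLM's next-token predictor: deterministically rank
    # candidate next characters given the context seen so far.
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    last = context[-1] if context else " "
    # Silly heuristic: guess the last character repeats, then the rest in order.
    return [last] + [c for c in alphabet if c != last]

def compress(text: str) -> list[int]:
    ranks = []
    for i, ch in enumerate(text):
        ranking = toy_model_ranking(text[:i])
        ranks.append(ranking.index(ch))
    return ranks  # a real codec would then entropy-code these small integers

def decompress(ranks: list[int]) -> str:
    out = ""
    for r in ranks:
        ranking = toy_model_ranking(out)  # must be the *same* model
        out += ranking[r]
    return out

data = "aaa bbb aaa"
assert decompress(compress(data)) == data
```

Swap in even a slightly different model on the decode side and the ranks map back to the wrong characters, which is why the exact model has to be available (or shipped) to decompress.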

[–] [email protected] 28 points 1 year ago (5 children)

Skimming through the linked paper, I noticed this:

Scaling beyond a certain point will deteriorate the compression performance since the model parameters need to be accounted for in the compressed output.

So it sounds like the model parameters needed to decompress the file are included in the file itself.
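
As a rough back-of-envelope of what that quoted sentence means (all numbers below are hypothetical, just to show the accounting):

```python
# A bigger model only pays off if its extra savings exceed the cost of
# accounting for the parameters themselves.
raw_size_gb     = 10.0   # data to compress (hypothetical)
compressed_gb   = 4.0    # compressed payload produced with a large model (hypothetical)
model_params    = 7e9    # parameter count of that model (hypothetical)
bytes_per_param = 2      # fp16 weights

model_gb = model_params * bytes_per_param / 1e9   # 14 GB of weights
total_gb = compressed_gb + model_gb               # what actually has to be stored/shipped
print(f"raw: {raw_size_gb} GB, compressed + model: {total_gb} GB")
# -> 10 GB vs 18 GB: once the parameters are counted, this "compression" made things bigger.
```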

[–] [email protected] 8 points 1 year ago (4 children)

So you'd have to use the same LLM to decompress the data? For example, if your friend sends you an archive compressed with this LLM, you won't be able to decompress it without downloading the same LLM?

[–] [email protected] 6 points 1 year ago (1 children)

This is not dissimilar to regular compression algorithms. If I compress a folder using the 7-Zip format (.7z), the end user needs a tool that supports 7z to decompress it, since it isn't a format everything supports out of the box. (I know Windows 11 is getting 7z support.)

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (2 children)

Except LLMs tend to be very big compared to standard decompression programs and often require a GPU with enough VRAM to run reasonably fast. That's a very big usability issue, IMO. If decompression could be done with a smaller and faster program (maybe also generated by the LLM?), it could be very useful and see pretty wide adoption (e.g. for future game devs who want to shrink their game from 150 GB to 130 GB).

[–] [email protected] 3 points 1 year ago

I don't know how this would apply to decompression models in practice, but in general, deep learning is VRAM intensive mainly during training. That's because training processes many examples per batch for generalization, and all of those examples (plus the activations and gradients needed for backprop) have to sit in memory at once.
But once the model is trained, the end user feeds in data one example at a time, so VRAM usually isn't an issue. There are also lightweight models designed to run on lower-end hardware.
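
A crude, made-up illustration of that point (the weights still have to fit either way, but training adds optimizer state and a whole batch of activations; real figures depend heavily on the model and framework):

```python
# All sizes below are guesses for a hypothetical 1B-parameter fp16 model.
params          = 1e9
bytes_per_param = 2

weights_gb = params * bytes_per_param / 1e9   # needed for training AND inference

optimizer_gb       = params * 8 / 1e9         # e.g. Adam keeps two fp32 moments per weight
act_per_example_gb = 0.5                      # activations kept for backprop, per example (guess)
batch_size         = 32

train_gb = weights_gb + optimizer_gb + batch_size * act_per_example_gb
infer_gb = weights_gb + act_per_example_gb    # single example, no gradients or optimizer state

print(f"training: ~{train_gb:.0f} GB, inference: ~{infer_gb:.1f} GB")
# -> roughly 26 GB vs 2.5 GB with these made-up numbers
```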

[–] [email protected] 3 points 1 year ago

Training tends to be more compute intensive, while inference is more likely to run on a smaller hardware footprint.

The neater idea would be a standard model, or a set of standard models, so that one ~30 GB download could cover roughly 80% of target cases; games and video seem like good candidates for this.
