this post was submitted on 15 Jun 2024
122 points (93.0% liked)

Privacy

[email protected] 14 points 5 months ago* (last edited 5 months ago)

Not just LLMs: all kinds of models are equivalent to freeware, i.e. you get the model itself plus the other essential bits needed for it to work. I wouldn't even call it source-available, since there is no source.

Take Redis as an example: I can still go grab the source and compile a working binary. That doesn't apply to ML models.

Of course, one can argue that the training process isn't deterministic, so even with the exact training corpus it can't reproduce the same model bit-for-bit across multiple runs. However, I would argue that the same corpus still gives you the chance to train a model of similar or equivalent performance. Hence, openness of the training corpus is an absolute requirement for a model to qualify as FOSS.

[email protected] 3 points 5 months ago

I've seen this said multiple times, but I'm not sure where the idea that model training is inherently non-deterministic comes from. I've trained a few very tiny models deterministically before...

[email protected] 1 point 5 months ago

Are you sure you can train a model deterministically, down to each bit? As in, feeding the resulting weights into sha256sum yields the same hash on every run?

[email protected] 1 point 5 months ago

Yes, of course. There's nothing gestalt about model training; fixed inputs produce fixed outputs.
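
A minimal sketch of the point (all names here are illustrative, not from any real training framework): seed the initialization, keep the data and update order fixed, and two "training" runs of a tiny linear model produce bit-identical weights, which a SHA-256 over the raw parameter bytes confirms. Real frameworks on GPUs add caveats (non-deterministic kernels, parallel reduction order), but the core loop itself is deterministic:

```python
import hashlib
import random
import struct

def train_tiny_model(seed: int) -> list[float]:
    """Fit y = 2x + 1 with plain per-sample gradient descent from a seeded init."""
    rng = random.Random(seed)            # fixed seed -> fixed initial weights
    w, b = rng.random(), rng.random()
    data = [(x, 2.0 * x + 1.0) for x in range(10)]  # fixed, noiseless corpus
    lr = 0.01
    for _ in range(1000):                # fixed epoch count and sample order
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return [w, b]

def model_hash(params: list[float]) -> str:
    """SHA-256 over the exact little-endian float64 bytes of the parameters."""
    raw = b"".join(struct.pack("<d", p) for p in params)
    return hashlib.sha256(raw).hexdigest()

h1 = model_hash(train_tiny_model(seed=42))
h2 = model_hash(train_tiny_model(seed=42))
print(h1 == h2)  # same seed, same data, same order -> identical hash
```

Since the data is noiseless and the update order is fixed, the weights also converge to the exact solution (w ≈ 2, b ≈ 1), so the runs agree both numerically and byte-for-byte.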