Ah, no, not the template files for the individual containers; the project descriptors are just compose files.
WalnutLum
They're 1-1 compose files.
The app just saves them as compose files and then runs docker compose in the backend.
it is EXTREMELY barebones
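So a project descriptor in that scheme would just be an ordinary compose file the app writes to disk and then runs `docker compose up` against. A minimal sketch of what one might look like (service names and images are made up for illustration):

```yaml
# Hypothetical project descriptor: nothing app-specific, just a plain
# compose file that gets saved 1-1 and handed to docker compose.
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```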
Regarding the open source models: while it makes sense that if a developer takes a model and does a significant portion of the fine-tuning, they should be liable for the result of that...
This kind of goes against the model open source has operated under for a long time, where providing source doesn't imply liability. So providing a fine-tuned model shouldn't either.
ChatGPT is already multiple smaller models. Most guesses peg GPT-4 as an 8x220-billion-parameter mixture of experts, i.e. eight 220-billion-parameter models squished together.
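For anyone unfamiliar with mixture-of-experts: a small router scores the experts per input, only the top-k experts actually run, and their outputs are mixed by the router's softmax weights. A toy sketch (names like `moe_forward` and the tiny linear "experts" are illustrative; real MoE experts are full feed-forward blocks with billions of parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, D = 8, 16  # 8 experts, toy hidden size

# Each "expert" here is just a small linear map standing in for a big FFN.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x: np.ndarray, top_k: int = 2) -> np.ndarray:
    """Route input x to the top_k highest-scoring experts and mix results."""
    logits = x @ router_w                 # one score per expert
    top = np.argsort(logits)[-top_k:]     # indices of the chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over just the chosen few
    # Only top_k of the 8 experts do any work for this input.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(D))
print(y.shape)  # (16,)
```

The point is the sparsity: parameter count is the sum of all experts, but compute per token only touches the routed few.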
My one dark hope is that AI will be enough of an impetus for somebody to update the DMCA.
> pay once, get access to everything everywhere
> thinks about Elsevier
OH GOD PLEASE NO
This is interesting but I'll reserve judgement until I see comparable performance past 8 billion params.
Sub-4-billion-parameter models all seem to have the same performance regardless of quantization nowadays, so 3 billion is a little hard to see potential in.
I seriously doubt the viability of this, but I'm looking forward to being proven wrong.
I would recommend instead using the AI Horde: https://stablehorde.net/ It's a collection of people hosting Stable Diffusion / text-generation models.
There's also OpenRouter, which can connect to ChatGPT with a token-based system. (They check your prompts for hornyposting, though.)
You wouldn't necessarily punish the person who modified Linux either; you'd punish the person who uses it for a nefarious purpose.
The important distinction is the intent to deceive, not that the code/model was modified so it could be used for nefarious purposes.