Are you running these LLMs in containers completely cut off from the internet? My understanding was that the “local first” LLMs aren’t truly offline and only try to answer basic queries locally before contacting their provider for help, which would invalidate the privacy argument.
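For what it’s worth, a rough sketch of the kind of fully cut-off setup being asked about, assuming a local runtime like Ollama (the image, volume and model names are just examples):

```
# No network namespace at all – the container cannot phone home.
# Models must already be present in the mounted volume, since nothing can be pulled.
docker run -d --network none -v ollama:/root/.ollama --name ollama ollama/ollama

# With no published ports, you talk to it from inside the container:
docker exec -it ollama ollama run llama3
```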
Do you by any chance have a guide on how to get that running?
Do you need to reboot after every update, or are updates applied live? I only have experience with rpm-ostree on Kinoite.
AIO is the way
Or use Caddy with a DNS challenge. No need to open any ports, and you can use it completely locally without any annoying certificate warnings.
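A minimal sketch of what that can look like, assuming a Caddy build with the Cloudflare DNS plugin compiled in; the hostname, token variable and upstream port are placeholders:

```
# Caddyfile – LAN-only service with a publicly trusted cert via DNS-01
media.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy 127.0.0.1:8096
}
```

Because the challenge goes through your DNS provider’s API, ports 80/443 never need to be reachable from the internet.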
You need to set a trigger for tubearchivist-jf to run after you’ve downloaded a video.
The docker compose is almost copy-paste. Not quite sure what the difficulty is?
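Roughly what the copy-paste amounts to — a hypothetical service just to show the handful of values you actually touch (image, ports, volumes):

```
services:
  app:
    image: nginx:alpine              # swap in the image from the project's docs
    ports:
      - "8080:80"                    # host:container – pick a free host port
    volumes:
      - ./data:/usr/share/nginx/html # point at your own storage paths
    restart: unless-stopped
```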
Check out Demus
I had major problems for days about half a year ago. I switched to dynv6.net and have had no issues so far.
Caddy combined with DNS challenges is the dream!
Z or z, depending on the scenario: lowercase z for a bind mount shared between containers, uppercase Z for one that’s private to a single container.
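A quick sketch with hypothetical host paths, showing both SELinux relabel flags on bind mounts:

```
# Shared label – several containers may mount /srv/media
docker run -d --name app1 -v /srv/media:/media:z nginx:alpine

# Private label – only this container may use /srv/app1-data
docker run -d --name app2 -v /srv/app1-data:/data:Z nginx:alpine
```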
Are you losing the mounts after a reboot? As in, are you mounting via /etc/fstab?
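For reference, a minimal sketch of a persistent /etc/fstab entry — the UUID, mount point and filesystem type are placeholders for whatever your setup uses:

```
# /etc/fstab – mounted automatically at boot
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /mnt/data  ext4  defaults  0  2
```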