Great write-up, thanks. For video learners, Wolfgang does a good step-by-step on YouTube.
thirdBreakfast
I'd love you to check back later with your conclusions.
Guide to Self-Hosting LLMs with Ollama:
- Download and run Ollama
- Open a terminal and type:
  ollama run llama3.2
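Once the model is pulled you can also hit it over the local HTTP API that Ollama exposes (by default on port 11434). A minimal sketch - the prompt is just an example:

  curl http://localhost:11434/api/generate -d '{
    "model": "llama3.2",
    "prompt": "Why is the sky blue?",
    "stream": false
  }'

With "stream": false you get one JSON response back instead of a token stream.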
If it's an M1, you definitely can, and it will work great with Ollama.
Thanks, I ended up going with Garage, but it has the same issue. I assumed I could just specify some buckets with their keys in the docker-compose or garage.toml, but no - they had to be done through the API or the command line.
This is correct. I'd already installed the minio CLI, but when I came back and read this I tried it out, and yes, once Garage is running in the container you can
  alias garage="docker exec -ti <container name> /garage"
so you can do the CLI things like
  garage bucket info test-bucket
or whatever. The --help for the garage command is pretty great, which is good since they don't write it up much in the docs.
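With that alias in place, setting up a bucket and key is only a few commands. Roughly this, from memory - check the subcommand names against --help for your Garage version, and the bucket/key names here are made up:

  garage bucket create test-bucket
  garage key create my-app-key
  garage bucket allow --read --write test-bucket --key my-app-key
  garage bucket info test-bucket    # confirm the key shows up with access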
Thanks. I ended up going with Garage (in Docker) and installed the MinIO client CLI (mc) for these tasks.
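For reference, the mc side looks something like this - endpoint, keys and bucket name are placeholders, and it assumes Garage's S3 API is on its default port 3900:

  # point mc at the Garage container
  mc alias set garage http://localhost:3900 <access-key> <secret-key>
  mc mb garage/test-bucket    # make a bucket
  mc ls garage                # list buckets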
One I'm writing. I use the host file system for its storage (as I have a strong preference for simple), but I'm interested in adding Litestream for replicating the database onto AWS.
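The basic shape of that is pretty small - a sketch, with a made-up database path and bucket name, and the AWS credentials supplied via the environment:

  # credentials for the S3 target
  export LITESTREAM_ACCESS_KEY_ID=<key id>
  export LITESTREAM_SECRET_ACCESS_KEY=<secret>
  # continuously replicate the SQLite file up to the bucket
  litestream replicate ./data/app.db s3://my-backup-bucket/app.db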
"Convert this text to make it sound like from a random person: "
Love the effort you've put into this question. You've clearly done some quality research and thinking.
When I asked myself this same question a couple of years ago, I ended up just buying a second-hand Synology NAS to use alongside my mini-PC. That would meet your criteria and avoid the (I'm not sure what magnitude) reliability risk of using disks connected over USB. It's more proprietary than I'd like, but it's battle-tested and reliable for me.
  NAME                        ID              SIZE    MODIFIED
  starcoder2:latest           f67ae0f64584    1.7 GB  3 days ago
  phi3:latest                 d184c916657e    2.2 GB  3 weeks ago
  deepseek-coder-v2:latest    8577f96d693e    8.9 GB  3 weeks ago
  llama3:8b-instruct-q8_0     1b8e49cece7f    8.5 GB  3 weeks ago
  dolphin-mistral:latest      5dc8c5a2be65    4.1 GB  3 weeks ago
  codeqwen:latest             df352abf55b1    4.2 GB  3 weeks ago
  llama3:latest               365c0bd3c000    4.7 GB  4 weeks ago
I mostly use starcoder2 with Continue for code autocomplete. The big deepseek-coder is a bit slow (I can feel it thinking), but it and the regular llama3 are good for chatbot-type programming questions.
I don't really have anything to compare the M1 performance to. I guess the 8GB models output text a little slower than the web versions of the same models, and the 4GB ones about the same. Using ollama in the terminal, there's sometimes a 0.5-2 second pause before it starts outputting. Not with phi3 though - it's surprisingly snappy for the quality of answers.
Build anything small into a container on your laptop, push it to Docker Hub or the GitHub package registry, then host it on fly.io for free.
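Something like this works - the image and app names are placeholders, and it assumes you've already logged in to the registry and to fly (fly auth login):

  # build and push the image (ghcr.io is the GitHub package registry)
  docker build -t ghcr.io/<you>/tiny-app:latest .
  docker push ghcr.io/<you>/tiny-app:latest
  # create the fly app without deploying, then deploy the pushed image
  fly launch --no-deploy
  fly deploy --image ghcr.io/<you>/tiny-app:latest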