Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
VMs have much bigger overhead, for one. And VMs are less reproducible too. If you had to set up a VM again, do you have all the steps written down? Every single step? Including that small "oh right" thing you always forget? A Dockerfile is basically just a list of those steps, written in a way a computer can follow. And every time you build an image in Docker, it just plays that list and gives you the resulting file system, ready to run.
It's incredibly practical in some cases. Let's say you want to try a different library or upgrade a component to a newer version. With VMs you could do it live, but you risk not being able to go back. You could make a copy or a checkpoint, but that's rather resource intensive. With Docker you just change the Dockerfile slightly and build a new image.
The resulting image is also immutable, which means that if you restart the Docker container, it's like reverting to the first VM checkpoint after a finished install, throwing out any cruft that has gathered. You can exempt specific files and folders from this if needed. So all the cruft and changes that have accumulated get thrown out, except the data folder(s) for the program.
I'm not sure I understand this idea that VMs have a high overhead. I just checked one of my servers: there are nine VMs running everything from chat channels to email to web servers, and the server is 99.1% idle. And this is on a PowerEdge R620 with low-power CPUs; it's not like I'm running something crazy-fast or even all that new. Hell, until the beginning of this year I was running all this stuff on PowerEdge 860s, which are nearly 20 years old now.
If I needed to set up the VM again, well, I would just copy the backup as a starting point, or copy one of the mirror servers. Copying a VM doesn't take much; even my bigger storage systems only use an 8 GB image. That takes, what, 30 seconds? And for building a new service image, I have a nearly stock install which has the basics like LDAP accounts and network shares set up. Otherwise, once I get a service configured, I just let Debian manage the security updates and do a full upgrade as needed. I've never had a reason to try replacing an individual library for anything, and each of my VMs runs a single service (http, smtp, dns, etc.), so even if I did try that, there wouldn't be any chance of it interfering with anything else.
Honestly, from what you're saying here, it just sounds like Docker is made for people who previously ran everything directly under the main server installation and frequently had upgrades of one service breaking another service. I suppose Docker works for those people, but the problems you say it solves are problems I have never run into over the last two decades.
Nine. How much RAM do they use? How much disk space? Try running 90, or 900. Currently, on my personal hobby Kubernetes cluster, there are 83 different instances running. Because of the low overhead, I can run even small tools in their own container, completely separate from the rest. If I run, say, a PostgreSQL server, spinning one up takes 90 MB of disk space for the image and about 15 MB of RAM.
I worked at a company that did - among other things - hosting, and was using VMs for easier management and separation between customers. I wasn't directly involved in that part day to day, but was friends with the main guy there. It was tough to manage. He was experimenting with automatically creating and setting up new VMs, stripping them of unused services and files, and having different sub-scripts for different services. This was way before Docker, but already then admins were looking in that direction.
So aschually, Docker is kinda made for people who run things in VMs, because that is exactly what they were looking for and duct-taping things together for before Docker came along.
Yeah I can see the advantage if you're running a huge number of instances. In my case it's all pretty small scale. At work we only have a single server that runs a web site and database so my home setup puts that to shame, and even so I have a limited number of services I'm working with.
Yeah, it also has the effect that when starting up, say, a new Postgres or web server is one simple command, a few seconds, and a few MB of disk and RAM, you do it more for smaller stuff too.
Instead of setting up one nginx for multiple sites, you run one nginx per site and keep its settings as part of the site repository. Or when a service needs a DB, you just start a new one just for that. And if that file analyzer ran in its own image instead of being part of the web service, you could scale it separately... oh, and it needs a Redis instance and a RabbitMQ server; that's two more containers that serve just that one web service. And so on...
Things that were a huge hassle before, like separate mini-VMs for each sub-service, and unique sub-services for each service, don't just become practical but easy. You can define all the services and their relations in one file, and Docker will recreate the whole stack, with all its services, with one command.
And then it also gets super easy to start more than one of them, for example for testing, or if you have a different client... which is how you easily reach a hundred instances running.
So instead of a service you have a service blueprint, which can be used in service stack blueprints, which allows you to set up complex systems relatively easily. With a granularity that would traditionally be insanity for anything other than huge, serious big-company deployments.
Well congrats, you are the first person who has finally convinced me that it might actually be worth looking at even for my small setup. Nobody else has been able to even provide a convincing argument that docker might improve on my VM setup, and I've been asking about it for a few years now.
It's a great tool to have in the toolbox. It might take some time to wrap your head around, but coming from VMs you already have most of the base understanding.
From a VM user's perspective, some translations:
A small tip: you can "exec" into a running container, which will run a command inside that container. Combined with interactive (-i) and terminal (-t) flags, it's a good way to get a shell into a running container and have a look around or poke things. Sort of like getting a shell on a VM.
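For example (the container name here is a placeholder for whatever "docker ps" shows):

```shell
# Open an interactive shell inside a running container
# (-i keeps stdin open, -t allocates a pseudo-terminal).
docker exec -it mycontainer /bin/sh

# Or run a one-off command without an interactive session:
docker exec mycontainer ls /var/log
```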
One thing that's often confusing for new people is image tags, partially because "tag" can mean two things. For example, "postgres" is a tag. That tag is attached to an image. The actual "name" of an image is its SHA sum. An image can have multiple tags attached. So far so good, right?
Now, let's get complicated. The full tag for "postgres" is actually "docker.io/postgres:latest". You see, every tag is a URL, and if it doesn't have a domain name, Docker uses its own. And then we get to the ":latest" part, which is also called a tag. Yup. All tags have a tag. If one isn't given, it's automatically set to "latest". This is used for versioning and different builds.
For example, postgres has tags like "16.1", which points to the latest 16.1.x image, built on the postgres maintainers' preferred distro; "16.1-alpine", which points to the latest Alpine-based 16.1.x image; "16" for the latest 16.x.x version; "alpine" for the latest Alpine-based version, be it 16 or 17 or 18... and so on. You can find more details here.
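As a sketch, here is how those tag forms look on the command line (the exact version tags available depend on what the postgres maintainers currently publish):

```shell
docker pull postgres                    # shorthand for docker.io/postgres:latest
docker pull postgres:16                 # latest 16.x.x build
docker pull postgres:16.1-alpine        # latest Alpine-based 16.1.x build
docker pull docker.io/postgres:latest   # fully qualified form of the first line
```

The first and last commands refer to the same image; Docker just fills in the default registry and the default ":latest" tag for you.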
The images on Docker Hub are made by... well, other people. Often by the developers of that software themselves, sometimes by Docker, sometimes by random people. You can make your own account there; it's free. If you then make an image and push it, it will be available as shdwdrgn/name - if a tag doesn't have a user component, the image is maintained / sanctioned by Docker.
You can also run your own image repository service, as long as it has https with valid cert. Then it will be yourdomain.tld/something
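For example, pushing a local image to your own registry is just a matter of tagging it with your registry's hostname first (the image name and domain here are placeholders):

```shell
# Give the local image "myimage" a tag that points at your own registry.
docker tag myimage yourdomain.tld/something/myimage:1.0

# Push it; docker uses the domain in the tag to pick the registry.
docker push yourdomain.tld/something/myimage:1.0
```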
So that was a brief introduction to the strange world of Docker. Docker is a for-profit company, btw. But the image format is standardized, and there are fully open-source ways to make and run images too - off the top of my head, Podman and Kubernetes.
One thing I'm not following in all the discussions about how self-contained docker is... nearly all of my images make use of NFS shares and common databases. For example, I have three separate smtp servers which need to put incoming emails into the proper home folders, but also database connections to track detected spam and other things. So how would all these processes talk to each other if they're all locked within their container?
The other thing I keep coming back to, again using my smtp servers as an example... It is highly unlikely that anyone else has exactly the same setup that I do, let alone that they've taken the time to build a docker image for it. So would I essentially have to rebuild the entire system from scratch, then learn how to create a docker script to launch it, just to get the service back online again?
For the NFS shares, there are generally two approaches. The first is to mount the share on the host OS, then map it into the container. Let's say the host has the NFS share at /nfs, and the folders you need are at /nfs/homes. You could do "docker run -v /nfs/homes:/homes smtpserverimage" and then those folders would be available at /homes inside the container.
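Sketched out in full, the first approach looks like this (server name, paths, and image name are all hypothetical):

```shell
# On the host: mount the NFS export once.
mount -t nfs fileserver:/export/homes /nfs/homes

# Bind-mount the host path into the container; inside the
# container the same files appear under /homes.
docker run -d -v /nfs/homes:/homes smtpserverimage
```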
The second approach is to set up NFS inside the image, and have that connect directly to the nfs server. This is generally seen as a bad idea since it complicates the image and tightly couples the image to a specific configuration. But there are of course exceptions to each rule, so it's good to keep in mind.
With database servers, you'd have that set up for accepting network connections, and then just give the address and login details in config.
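For example, a containerized service typically gets its database location through environment variables passed at startup (the variable names below are illustrative; each image documents its own):

```shell
# Point the hypothetical smtpserverimage at an external database.
docker run -d \
  -e DB_HOST=db.internal.lan \
  -e DB_PORT=5432 \
  -e DB_USER=mailapp \
  -e DB_PASSWORD=secret \
  smtpserverimage
```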
And having a special setup.. How special are we talking? If it's configuration, then that's handled by env vars and mapping in config files. If it's specific plugins or compile options.. Most built images tend to cast a wide net, and usually have a very "everything included" approach, and instructions / mechanics for adding plugins to the image.
If you can't find what you're looking for, you can build your own image. Generally that's done by basing your Dockerfile on an official image for that software, then making your changes. We can again take the "postgres" image, since that's a fairly well-made one that has exactly the easy mechanism for this that we're looking for.
So if you have a .sh script that does some extra stuff before the DB starts up, let's say "mymagicpostgresthings.sh" and you want an image that includes that, based on Postgresql 16, you could make this Dockerfile in the same folder as that file:
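Based on the description here, a minimal Dockerfile would look something like this (it relies on the official postgres image's convention of running scripts placed in /docker-entrypoint-initdb.d/ when the database is first initialized):

```dockerfile
# Base on the official PostgreSQL 16 image.
FROM postgres:16

# Scripts in this folder are executed by the official image's
# entrypoint when the database is first initialized.
COPY mymagicpostgresthings.sh /docker-entrypoint-initdb.d/
```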
and when you run "docker build . -t mymagicpostgres" in that folder, it will build that image with your file included, and call it "mymagicpostgres" - which you can run by doing "docker run -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 mymagicpostgres"
In some cases you need a more complex approach. For example, I have an nginx streaming server which needs extra patches. I found this repository for just that, and if you look at its Dockerfile you can see each step it's doing. I needed a few modifications to that, so I have my own copy with a different nginx.conf, an extra patch it downloads and applies to the source code, and a startup script that changes some settings from env vars - but that had 90% of the work done.
So depending on how big the changes you need are, you might have to recreate things from scratch, or you can piggyback on what's already made. And the "docker script to launch it" is usually a docker-compose.yml file. Here's a postgres example:
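A compose file matching that description (postgres plus the adminer web UI on port 8080, both set to restart automatically) might look like:

```yaml
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
```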
If you run "docker compose up -d" in that file's folder, Docker will download and start up the images for postgres and adminer, and forward port 8080 to adminer. From adminer's point of view, the postgres server is available as "db". And since both have "restart: always", if one of them crashes or the machine reboots, Docker will start them up again. So that will continue running until you run "docker compose down" or something catastrophic happens.
Hey I wanted to say thanks for all the info and I've saved this aside. Had something come up that is requiring all my attention so I just got around to reading your message but it looks like my foray into docker will have to wait a bit longer.
Doesn't that require a lot of resources since you're running (mysql, nginx, etc.) numerous times (once for each container), instead of once globally?
Or, per your comment below:
You'd only have two instances of postgres, for example - one for all the Docker containers and one global/server-wide? Still, that doubles the resources used, no?