theterrasque

joined 1 year ago
[–] [email protected] 5 points 11 months ago (8 children)

Nine. How much RAM do they use? How much disk space? Try running 90, or 900. Currently, on my personal hobby Kubernetes cluster, there are 83 different instances running. Because of the low overhead, I can run even small tools in their own container, completely separate from the rest. If I run, say, a PostgreSQL server, spinning one up takes about 90 MB of disk space for the image and about 15 MB of RAM.
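For illustration, the whole "spinning one up" is basically a one-liner (a sketch; the tag and container name are arbitrary):

    # the alpine-based image keeps the disk footprint small
    docker run -d --name tinydb -e POSTGRES_PASSWORD=secret postgres:16-alpine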

I worked at a company that did - among other things - hosting, and used VMs for easier management and separation between customers. I wasn't directly involved in that part day to day, but I was friends with the main guy there. It was tough to manage. He was experimenting with automatically creating and setting up new VMs, stripping them of unused services and files, and having different sub-scripts for different services. This was way before Docker, but even then admins were looking in that direction.

So aschually, Docker is kinda made for people who run things in VMs, because that is exactly what they were looking for and duct-taping together before Docker came along.

[–] [email protected] 1 points 11 months ago

Just remember that the Raspberry Pi has an ARM CPU, which is a different architecture. Docker can build for it and produce multi-platform images automatically. By default that takes more time and space, though, as it runs an ARM emulator (QEMU) to build them.

https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/ has some info about it.
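The short version of the trick from that post (a sketch assuming a Go program; names and paths are made up) is to run the build stage natively and let the compiler cross-compile, so no emulator is needed:

    # syntax=docker/dockerfile:1
    # build stage runs on the builder's native architecture, not under emulation
    FROM --platform=$BUILDPLATFORM golang:1.22 AS build
    ARG TARGETOS
    ARG TARGETARCH
    WORKDIR /src
    COPY . .
    # Go cross-compiles to the target arch itself, no ARM emulator involved
    RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

    FROM alpine
    COPY --from=build /out/app /usr/local/bin/app
    ENTRYPOINT ["app"]

Then both architectures can be built in one go with something like: docker buildx build --platform linux/amd64,linux/arm64 -t you/app .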

[–] [email protected] 3 points 11 months ago (10 children)

VMs have much bigger overhead, for one. And VMs are less reproducible too. If you had to set up a VM again, do you have all the steps written down? Every single step? Including that small "oh right" thing you always forget? A Dockerfile is basically just that list of steps, written in a way a computer can follow. And every time you build an image in Docker, it just replays that list and gives you the resulting file system, ready to run.
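A minimal (entirely hypothetical) example of such a list:

    FROM debian:12
    # every "oh right" step written down, so the machine never forgets it
    RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates
    # the app binary and its config are placeholders here
    COPY myapp /usr/local/bin/myapp
    COPY myapp.conf /etc/myapp.conf
    ENTRYPOINT ["myapp"]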

It's incredibly practical in some cases. Let's say you want to try a different library or upgrade a component to a newer version. With VMs you could do it live, but you risk not being able to go back. You could make a copy or a checkpoint, but that's rather resource-intensive. With Docker you just change the Dockerfile slightly and build a new image.

The resulting image is also immutable, which means that if you recreate the docker container, it's like reverting to the first VM checkpoint after a finished install, throwing out any cruft that has gathered. You can exempt specific files and folders from this if needed, so everything gets thrown out except, say, the program's data folder(s).
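A sketch of that exemption with the official postgres image (the volume name is arbitrary; the path is that image's data directory):

    # everything outside the mounted volume is discarded when the container
    # is recreated; the named volume "pgdata" survives
    docker run -d -e POSTGRES_PASSWORD=secret \
      -v pgdata:/var/lib/postgresql/data \
      postgres:16-alpine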

[–] [email protected] 5 points 11 months ago* (last edited 11 months ago) (2 children)

Modularity, compartmentalization, reliability, predictability.

One piece of software needs MySQL 5, another needs MariaDB 7. A third service needs PHP 7 while the distro-supported version is 8. A fourth service uses CUDA 11.7 - not 11.8, which is what everything in your package manager uses. A fifth service's install was only tested on the latest Ubuntu, and now you need to figure out what RPM gives the exact library it expects. A sixth service expects ODBC to be set up in a very specific way, but handwaves it in the installation docs. A seventh program expects a symlink at a specific place that exists on the desktop version of the distro, but not the server version. And then you've got that weird program that insists on admin access to the database so it can create its own user. Since I don't trust it with that, it can just have its own database server running in Docker, and good riddance.

And so on and so forth. With Docker, not only is all this specified in excruciating detail, it's also the exact same setup on every install.
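For instance, a docker-compose.yml sketch (service names and tags are illustrative, not an exact match for the versions above) pins each conflicting dependency side by side:

    services:
      app-one-db:
        image: mysql:5.7          # the exact version app one needs
      app-two-db:
        image: mariadb:10.6       # a different, conflicting version, no problem
      app-three:
        image: php:7.4-apache     # PHP 7 even if the distro only ships 8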

You don't have it breaking on Arch because the maintainer of a library there decided to inline a patch that supposedly doesn't change anything, but somehow causes the program to segfault.

I can develop a service on Windows, test it, and deploy it to my Kubernetes cluster, and I don't even have to worry about which machine it lands on - it just runs on some node. Probably an Ubuntu machine, but maybe on that Gentoo node instead. And if my macOS friend wants to try it out, no problem. I can just give him a command, and it's running on his laptop. No worries about the right runtime, or setting up the environment and libraries, and all that.

If you're an old Linux admin... This is what utopia looks like.

Edit: And recreating a container is almost like reinstalling the OS and the program. Since the image is static, recreating the container removes all the file system cruft too and starts up a pristine new copy (except, of course, the specific files and folders you have chosen to keep between restarts)

[–] [email protected] 6 points 11 months ago (1 children)

The first language in the Accept-Language header that the server also supports
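For example, given a request with (header syntax per the HTTP spec, values illustrative):

    Accept-Language: fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7

a server that only has English and German content would serve English, since that's the most preferred language it can actually provide.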

[–] [email protected] 6 points 11 months ago

That's in separate headers

[–] [email protected] 2 points 11 months ago

In other news, they also regulated that knives must be designed to prevent stabbing people, and guns must be designed to only shoot bad guys.

[–] [email protected] 2 points 11 months ago

And what do you think CD writers are? I'm not talking about rewritable CDs here, but normal burn-once CDs. You could write some files, then later decide to replace a file and add more.

Look up CD sessions (multisession discs). Until you finalized the disc, and as long as there was still free space, you could add, modify and delete data on it.

[–] [email protected] 2 points 11 months ago (2 children)

You had tricks on CDs and such to make them kinda work as read/write storage.

[–] [email protected] 4 points 11 months ago

Or "for your security"

[–] [email protected] 8 points 11 months ago

Or they call tech support and say their computer doesn't work anymore
