this post was submitted on 11 Oct 2023

Programmer Humor
A Containerized Night Out: Docker, Podman, and LXC Walk into a Bar


🌆 Setting: The Busy Byte Bar, a local hangout spot for tech processes, daemons, and containerization tools.


🍺 Docker: *walks in and takes a seat at the bar* Bartender, give me something light and easy to use, just like my platform.

🍸 Bartender: Sure thing, Docker. One "Microservice Mojito" coming up.


🥃 Podman: *strides in, surveying the scene* Ah, Docker, there you are. I heard you've been spinning up a lot of containers today.

🍺 Docker: Ah, Podman, the one who claims to be just like me but rootless. What'll it be?

🥃 Podman: I'll have what he's having, but make it daemonless.


🍹 LXC: *joins the party, looking slightly overworked* You two and your high-level functionalities! I've been busy setting up entire systems, right down to the init processes.

🍺 Docker: Oh, look who decided to join us. Mr. Low-Level himself!

🥃 Podman: You may call it low-level, but I call it flexibility, my friends.

🍸 Bartender: So, LXC, what can I get you?

🍹 LXC: Give me the strongest thing you've got. I need all the CPU shares I can get.


🍺 Docker: *sips his mojito* So, Podman, still trying to "replace" me?

🥃 Podman: Replace is such a strong word. I prefer to think of it as giving users more options, that's all. *winks*

🍹 LXC: *laughs* While you two bicker, I've got entire Linux distributions depending on me. No time for small talk.


🍺 Docker: Ah, but that's the beauty of abstraction, my dear LXC. We get to focus on the fun parts.

🥃 Podman: Plus, I can run Docker containers now, so really, we're like siblings. Siblings where one doesn't need superuser permissions all the time.

🍹 LXC: *downs his strong drink* Well, enjoy your easy lives. Some of us have more... weight to carry.


🍸 Bartender: Last call, folks! Anyone need a quick save and exit?

🍺 Docker: I'm good. Just gonna commit this state.

🥃 Podman: I'll `podman checkpoint` this moment; it's been fun.

🍹 LXC: Save and snapshot for me. Who knows what tomorrow's workloads will be?


And so, Docker, Podman, and LXC closed their tabs, leaving the Busy Byte Bar to its quiet hum of background processes. They may have different architectures, capabilities, and constraints, but at the end of the day, they all exist to make life easier in the ever-expanding universe of software development.

And they all knew they'd be back at it, spinning up containers, after a well-deserved system reboot.

🌙 The End.

I was a bit bored after working with Podman, Docker, and LXC, so I asked ChatGPT to generate a fun story about these technologies. I think it's really funny, and far better than these things usually turn out. I did a quick search to see if I could find something similar, but I couldn't find anything; I still suspect it was repurposed from somewhere.

I hope you can enjoy it despite it being AI generated.

[email protected] 1 points 1 year ago

You can run LLMs such as OpenLLaMA and GPT-2 on text-generation-webui. It is very similar to the Stable Diffusion web UI.
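As a rough illustration of what running a small model like GPT-2 locally looks like, here is a minimal sketch using Hugging Face `transformers` directly (rather than the web UI itself); the prompt and generation parameters are just examples:

```python
# Sketch: local text generation with GPT-2 on CPU via Hugging Face transformers.
# This illustrates the kind of model text-generation-webui serves; it is not
# how the web UI is invoked.
from transformers import pipeline

# device=-1 selects the CPU; the model weights are downloaded on first use.
generator = pipeline("text-generation", model="gpt2", device=-1)

result = generator(
    "Docker, Podman, and LXC walk into a bar",
    max_new_tokens=20,
    do_sample=False,  # greedy decoding for repeatable output
)
print(result[0]["generated_text"])
```

The generated text includes the prompt by default; sampling options like `temperature` and `top_p` can be enabled via `do_sample=True`.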

[email protected] 1 points 1 year ago

Nice, I will check it out. I currently run InvokeAI. I am curious about the inference speed.

[email protected] 2 points 1 year ago

If I'm being honest, it is fairly slow. It takes a good few seconds to respond on a 6800 XT using the medium VRAM option, but that is the price you pay for running AI locally. Of course, a cluster should drastically improve the speed of the model.

[email protected] 2 points 1 year ago (last edited 1 year ago)

I don't have a cluster, and the only GPU in my server is busy with image generation. I hope CPU inference is somewhat usable (74 cores), but I will have to try. If it isn't usable, I can still rent GPU time from cloud providers.