sudneo

joined 1 year ago
[–] [email protected] 1 points 7 months ago

I personally really like the gazillion bangs that are also available, the personal up/down-ranking and blocking of websites, and the quick answers, which are often fairly good (I mostly use them for documentation lookup). The lenses are definitely the best feature though, especially coupled with bangs. I even converted my wife, who really loves it.

[–] [email protected] 2 points 7 months ago

Thanks (grazie?)! I was looking for something similar, and kanidm looks great feature-wise and simple to deploy!

[–] [email protected] 2 points 7 months ago (1 children)

I struggled with this for a long time, and then I just decided to use Synology Photos.

It has albums, tagging, geolocation, and sharing. It has phone picture backup, and it is inherently a backup since it lives on my NAS and I back that data up again.

I want to keep the thing I really care about the most friction-free, and also not too dependent on me, so that I can still experiment.

I didn't try PiGallery2 though; maybe I will have a look!

[–] [email protected] 4 points 8 months ago

Did it sound cold? Because I didn't mean it that way, I just meant to actually answer the question from my PoV. Just for the record, I also did not downvote you.

So yeah, use whatever footgun you prefer, I don't judge :)

[–] [email protected] 3 points 8 months ago

Or rustic! It is compatible with restic but has some nice additions, for example the fact that it supports config files. That makes operations a bit easier IMHO (I am currently using both).

[–] [email protected] 1 points 8 months ago

I really thought swarm was dead :)

To be honest, some Kubernetes distributions make cluster operations minimal (I use k0s managed via Ansible)!

Either way, the moment you go from N containers on one box to N containers on M boxes, you need to start considering how to handle stateful applications, load balancing, etc. That in general requires knowledge of a domain which is quite different from simply having applications wrapped in containers locally.
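Just to give an idea of how small the cluster-ops surface can be with k0s: this is a hand-run sketch rather than my actual Ansible setup, and the token path is made up, but it's roughly what bootstrapping looks like.

# On the controller node
curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install controller
sudo k0s start
sudo k0s token create --role=worker > worker.token

# On each worker node (after copying worker.token over)
curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install worker --token-file /path/to/worker.token
sudo k0s start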

[–] [email protected] 3 points 8 months ago* (last edited 8 months ago)

Yeah ultimately every container has its own veth interface, so you can do shaping using tc on those.

Edit: I had a look at docker-tc. It does what you want, BUT: unless your use case is complex, I would really think twice about running a tool written in bash which has access to the Docker socket (i.e. a trivial node escape) and runs with the NET_ADMIN capability.

That's a lot of power to do something you can also do with a few lines of code executed after you start the container. Again, provided that your use case is not complex.
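For reference, a minimal sketch of that "few lines after start" approach. The container name, the 10mbit rate, and the direction you care about are all assumptions here; adjust to taste.

# Find the host-side veth of a running container and shape it with tc.
CONTAINER=myapp                                   # hypothetical container name
PID=$(docker inspect -f '{{.State.Pid}}' "$CONTAINER")

# Inside the container's netns, eth0 shows up as "eth0@ifN", where N is the
# ifindex of its host-side veth peer.
PEER_IDX=$(nsenter -t "$PID" -n ip -o link show eth0 | grep -oP '(?<=@if)\d+')
VETH=$(ip -o link | awk -F': ' -v idx="$PEER_IDX" '$1 == idx {print $2}' | cut -d'@' -f1)

# An htb qdisc on the host-side veth shapes traffic flowing towards the
# container; the opposite direction needs an ingress policer or rules elsewhere.
tc qdisc add dev "$VETH" root handle 1: htb default 10
tc class add dev "$VETH" parent 1: classid 1:10 htb rate 10mbit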

[–] [email protected] 4 points 8 months ago (2 children)

Cgroups have the ability to limit TCP and total network bandwidth. I don't know off the top of my head whether this can be configured at runtime (i.e. via docker run), but you can specify at runtime the cgroup parent to use. This means you can pre-create the cgroup, set the limits, and start the container with that parent cgroup.

You can also run a hook script every time the container is launched that adds its PID to a cgroup, or possibly use tc.

I am not aware of the ability to only limit uplink bandwidth, but I have not researched this.
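As a rough illustration of the pre-created cgroup plus hook idea: this sketch assumes a cgroup v1 host with the net_cls controller mounted, eth0 as the uplink, and a single-process container; the group name, classid, rate, and image are all made up.

# Pre-create a net_cls cgroup and tag its traffic with a classid.
mkdir -p /sys/fs/cgroup/net_cls/limited
echo 0x00100001 > /sys/fs/cgroup/net_cls/limited/net_cls.classid

# Map that classid to an htb class capped at 50mbit on the uplink
# (tc handles are hex, so class 10:1 matches classid 0x00100001).
tc qdisc add dev eth0 root handle 10: htb
tc class add dev eth0 parent 10: classid 10:1 htb rate 50mbit
tc filter add dev eth0 parent 10: protocol ip prio 1 handle 1: cgroup

# Hook after launch: move the container's main process into the cgroup.
docker run -d --name myapp myimage
docker inspect -f '{{.State.Pid}}' myapp > /sys/fs/cgroup/net_cls/limited/cgroup.procs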

[–] [email protected] 2 points 8 months ago (4 children)

I think k8s is a different beast that requires way more domain-specific knowledge beyond basic server/Linux administration. I do run it, but it's an evolution of a need, specifically once you want to manage a fleet of machines running containers.

[–] [email protected] 2 points 8 months ago (2 children)

Because the LXC way is inherently different from the docker/podman way. It's aimed at running full systems rather than single-process containers. It has its use cases, but they are not as common IMHO.

[–] [email protected] 6 points 8 months ago (1 children)

You have a bunch of options:

kubectl run $NAME --image=$IMAGE

This just creates a pod running the specified image. If you kill the pod, or it terminates, it won't be restarted. In general though, you probably want to do some customization before running (maybe you need volumes, secrets, env, ports, labels, securityContext, etc.), and for that you can let kubectl generate the boilerplate YAML and then make some edits:

kubectl run $NAME --image=$IMAGE --dry-run=client -o yaml > mypod.yaml
# edit mypod.yaml
kubectl create -f mypod.yaml

You can do the same with a deployment or statefulset:

kubectl create deployment $NAME -n $NAMESPACE [...] --dry-run=client -o yaml > deployment.yaml

In case you don't need anything fancy, the kubectl create subcommand lets you create simple workloads directly, so that's probably the answer to your question.
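For example (image and names purely illustrative), a throwaway deployment plus a service for it can be as short as:

kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --target-port=80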
