DeltaTangoLima

joined 1 year ago
[–] [email protected] 2 points 10 months ago (3 children)

At the time of my move I went through my list of apps I bought and tallied up the ones that I still used. It was less than $50 of repurchases.

Yeah, I know this is what I should do too. As someone else said in this comment thread, gotta tear that bandaid off at some point. Just shits me that I should have to. But the freedom after doing it... <chef's kiss>

[–] [email protected] 15 points 10 months ago

If you have the means, you could self-host a Piped server? Otherwise, try out https://piped.video.

[–] [email protected] 2 points 10 months ago

Yeah, that's the other thing that shits me. Paying for my wife and me on Workspaces, and we don't have family sharing rights. We're literally paying to be treated like second-class citizens!

[–] [email protected] 2 points 10 months ago

Yep, all true. I was oversimplifying in my explanation, but you're right. There's a lot more to it than what I wrote - I was more relating docker to what we used to do with chroot jails.

[–] [email protected] 1 points 10 months ago

Yeah, I came across this project a few months ago, and got distracted before wrapping my head around the architecture. Another weekend project to try out!

[–] [email protected] 10 points 10 months ago (2 children)

To answer each question:

  • You can run rootless containers but, importantly, you don't need to run Docker as root. Should the unthinkable happen, and someone "breaks out" of docker jail, they'll only be running in the context of the user running the docker daemon on the physical host.
  • True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I want 100% control of, and have my own local Docker registry to hold them.
  • It's the opposite - you don't really need to care about docker networks unless you have an explicit need to contain a given container's traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required (quick example below).
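A purely illustrative example of both - the network name, paths and image are placeholders, not my actual setup:

```
# user-defined network, only needed if you want to isolate a container's traffic
docker network create photos_net

# read-only bind mount (:ro) - the container can read /srv/photos but never write to it
docker run -d --name photo-app \
  --network photos_net \
  -v /srv/photos:/data/photos:ro \
  example/photo-app:latest
```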

I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I've created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.
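If anyone wants to copy the Docker-inside-LXC part, the main gotcha is enabling nesting (and keyctl for unprivileged CTs). Something like this - the template name, CT ID, sizes and bridge are just examples, not my exact config:

```
# unprivileged Debian CT that can run Docker
pct create 120 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname my-service \
  --unprivileged 1 \
  --features nesting=1,keyctl=1 \
  --cores 2 --memory 2048 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr1,tag=100,ip=dhcp
```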

It's not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.

Why? I like to play.

Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.
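The update/rollback part is really just compose doing its thing. A rough sketch, assuming the service's image tag is set in the compose file:

```
# update: bump the image tag in the compose file, then
docker compose pull && docker compose up -d

# rollback: set the tag back to the previous version, then
docker compose up -d --force-recreate
```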

Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

Let's say there's a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).

I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
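The end-to-end flow is roughly this - CT IDs, hostnames and playbook paths are illustrative only:

```
# clone a fresh CT from my Debian template and start it
pct clone 9000 131 --hostname immich-rival --full
pct start 131

# provision Docker, the Portainer agent, etc. with the usual playbook
ansible-playbook -i inventory/homelab.yml provision-docker.yml --limit immich-rival
```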

I have a play with the competitor for a bit. If I don't like it, I just delete the CT and move on. If I do, I can point my photos... hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don't like about the new kid on the block.

[–] [email protected] 2 points 10 months ago

You still need to do that, but you need the Linux bridge interface to have VLANs defined as well, as the physical switch port that trunks the traffic is going to tag the respective VLANs to/from the Proxmox server and virtual guests.

So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (Physical infrastructure VLAN).

My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and vlan tag) for vlan100.
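On the Proxmox host, that ends up looking something like this in /etc/network/interfaces - addresses are made up for the example:

```
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 60 100

# Proxmox server's own address, on the physical infrastructure VLAN
auto vmbr1.60
iface vmbr1.60 inet static
    address 192.168.60.10/24
    gateway 192.168.60.1

# Proxmox guest VLAN interface
auto vmbr1.100
iface vmbr1.100 inet static
    address 192.168.100.10/24
```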

The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:

  • switch trunk port
    • enp2s0f0 (physical)
      • vmbr1 (Linux bridge)
        • vmbr1.60 (Proxmox server interface)
        • vmbr1.100 (Proxmox VLAN interface)
          • virtual guest nic (w/ vlan tag and IP address)
        • vtnet1 (OPNsense "physical" nic, but actually virtual)
          • vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)

All virtual guests default route via OPNsense's IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.

Like I said, it's a headfuck when you first set it up. Interface-ception.

The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I'd use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would've been overkill.

[–] [email protected] 2 points 10 months ago (1 children)

No worries mate. Sing out if you get stuck - happy to provide more details about my setup if you think it'll help.

[–] [email protected] 5 points 10 months ago (7 children)

I’d avoid Google, they don’t have a stable offering

What do you mean by not stable?

I've been (stuck with) Google Workspace for many, many years - I was grandfathered in from the old G-Suite plans. The biggest issue for me is that all my Play Store purchases for my Android are tied to my Workspace identity, and there's no way to unhook that if I move.

I want to move. I have serious trust issues with Google. But I can't stop paying for Workspaces, as it means I'd lose all my Android purchases. It's Hotel fucking California.

But I've always found the email to be stable, reliable, and the spam filtering is top notch (after they acquired and rolled Postini into the service).

[–] [email protected] 3 points 10 months ago* (last edited 10 months ago) (2 children)

Using CloudFlare and using the cloudflared tunnel service aren't necessarily the same thing.

For instance, I use cloudflared to proxy my Pi-hole servers' requests to CF's DNS-over-HTTPS servers, for maximum DNS privacy. Yes, I'm trusting CF's DNS servers, but I need to trust an upstream DNS somewhere, and it's not going to be Google's or my ISP's.
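If anyone wants to replicate that, the cloudflared side is just its DNS proxy mode - listen address and port are whatever suits your Pi-hole box:

```
# local DoH proxy for Pi-hole to use as its upstream
cloudflared proxy-dns \
  --address 127.0.0.1 \
  --port 5053 \
  --upstream https://1.1.1.1/dns-query \
  --upstream https://1.0.0.1/dns-query

# then point Pi-hole's custom upstream DNS at 127.0.0.1#5053
```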

I use CloudFlare to proxy access to my private li'l Lemmy instance, as I don't want to expose the IP address I host it on. That's more about privacy than security.

For the few self-hosted services I expose on the internet (Home Assistant being a good example), I don't even bother with CF at all. I use Nginx Proxy Manager and Authelia, providing SSL I control and enforcing a 2FA policy I administer.
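The Authelia side of that is just an access control policy along these lines - the domain is a placeholder, obviously:

```
# authelia configuration.yml (excerpt)
access_control:
  default_policy: deny
  rules:
    - domain: "ha.example.com"
      policy: two_factor
```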

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago)

Hmmm - not really any more. I have everything on the same VLAN, with publicly accessible services sitting behind an nginx reverse proxy (using Authelia and 2FA).

The real separation I have is the separate physical interface I use for WAN connectivity to my virtualised firewall/router - OPNsense. But I could also easily achieve that with VLANs on my switch, if I only had a single interface.

The days of physical DMZs are almost gone - virtualisation has mostly superseded them. Not saying they're not still a good idea, just less of an explicit requirement nowadays.
