If they are running on the same server as nginx, then they will need to be proxied as well.
Only one service can bind to a given port. So if the webserver running WordPress is bound to 80/443, nginx won't be able to acquire those ports.
Hence the reverse proxying: nginx binds 80/443, then forwards to the other services on arbitrary internal ports.
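A minimal sketch of that idea (hostname and backend port are placeholders):

```nginx
# nginx owns port 80; the app listens on an arbitrary internal port (8080 here)
server {
    listen 80;
    server_name app.example.com;            # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:8080;   # wherever the backend actually listens
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```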
If you are forwarding to multiple services on the same port, plain TCP proxying isn't going to work.
The proxy server has to know where to send the connection, so it has to be protocol-aware. In this case, http/https is the protocol.
Luckily TLS/HTTPS has functionality for this without having to terminate the encryption, called SNI (Server Name Indication).
Here is an article using SNI and nginx.
https://gist.github.com/kekru/c09dbab5e78bf76402966b13fa72b9d2
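Roughly what that article does, for reference (hostnames and backend IPs are made up): nginx peeks at the SNI hostname with ssl_preread and forwards the still-encrypted TCP stream.

```nginx
# Needs the stream + ssl_preread modules; TLS is NOT terminated here
stream {
    map $ssl_preread_server_name $backend {
        site-a.example.com  10.0.0.10:443;   # placeholder backends
        site-b.example.com  10.0.0.11:443;
        default             10.0.0.10:443;
    }

    server {
        listen 443;
        ssl_preread on;          # read the SNI name from the ClientHello
        proxy_pass $backend;
    }
}
```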
As has been mentioned, put the WordPress sites on different internal ports or different internal IPs (easier if they are Dockerised on a Docker network).
Then have nginx have the external 80/443 port binds, and reverse proxy to the WordPress instances.
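Something like this, assuming two WordPress instances on internal ports 8081/8082 (all names and ports are placeholders):

```nginx
server {
    listen 80;
    server_name blog-one.example.com;
    location / {
        proxy_pass http://127.0.0.1:8081;   # first WordPress instance
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name blog-two.example.com;
    location / {
        proxy_pass http://127.0.0.1:8082;   # second WordPress instance
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```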
This is really handy for nginx config files
https://www.digitalocean.com/community/tools/nginx
I use ClouDNS for nameserving. It's free.
Cookie consent is actually supposed to be about all data tracking.
There are quite a few analytics tools that do fingerprinting on the basis that "because it's not a cookie, it's not covered by cookie consent". But it is still covered.
Only some of them respect the fact that declining cookies is about declining tracking in general.
So, if you consent to all cookies, you are also consenting to any fingerprinting that doesn't rely on cookies. So deleting cookies wouldn't remove that fingerprinting data.
That's a pretty broad question.
How many nodes are you running? Are you using Ceph? Or another flavour of distributed storage? Or an external NAS/SAN? Or just local arrays? ZFS? Btrfs?
What's your backup strategy? Do you use Proxmox Backup Server?
If you can figure out what you don't like about your current setup, there will probably be a tutorial or article about alternatives.
Sometimes they can be applied without having to reinstall (actually, 99% of them probably can. Sometimes I just find it easier to start from scratch tho)
It's loss.
Loss is a bizarre webcomic that massively changed in tone vs previous comics in the series, was widely mocked and parodied by other webcomics, and became a meme.
The "I II II L" represents the 4 panels of the comic.
If you have a spare computer, install proxmox on it.
There are loads of tutorials on how to do this; it has a good installer, after which it's all a web-based GUI.
Use it to spin up VMs to your heart's content, create scripts to automatically provision a new Ubuntu or Debian or whatever flavour. Or run up some Windows VMs. You can pass through GPUs and other devices (tho this can be difficult, again lots of tutorials out there).
Be prepared to spend some time learning proxmox. It took me 2 or 3 installs to figure out the best way to set up networks, storage etc. Mostly cause I just jumped in, found something that could be better, googled that and found a useful tutorial on it so started again.
But once proxmox is running, everything else becomes so much easier.
What you are doing is exposing ports to the internet and advertising your home IP.
Which is fine, mostly.
A better way has been mentioned: Cloudflare Tunnels. This means your server connects out to Cloudflare's servers, instead of Cloudflare forwarding traffic to your IP.
You then get all the protections that Cloudflare offers (DDoS protection, client filtering etc).
However, Cloudflare then has access to everything that goes through it.
It's very secure, chances are Cloudflare doesn't care about your traffic, and as long as everything is legal you will be fine.
If you don't want cloudflare knowing all of your servers served traffic, then you either need to run your own reverse-proxy-over-vpn style endpoint on a VPS (that you trust), or accept the additional risk of leaking your home network public IP.
If you accept the additional risk of leaking your home IP, then it's worth making sure your firewall/router is up to scratch.
Make sure it has active development (which is why most people use something like OPNsense), and is always up to date (so any vulnerabilities get patched).
Beyond that, the system is as secure as you want it to be. The more secure, the more maintenance and upkeep it needs.
There are many things you can do.
The easiest is to firewall on the devices themselves. So the reverse proxy will only accept inbound connections from WAN, cannot establish its own connections out to WAN, and only allows whatever connections are required for forwarding to the Pi.
And the Pi would be firewalled to only allow incoming connections from the reverse proxy, unable to establish connections out to WAN, plus whatever else is needed for it to function.
Where these rules are established is up to you. They could be on the router/firewall with each device on its own VLAN (or PVLAN if you are fancy). This is the most secure, but harder to implement.
Or they could be on the device itself, as long as the processes that are "doing the thing" cannot change the firewall rules (ie, don't run them as root or as a privileged user). Correctly configured, this is as secure as doing it on the router/firewall.
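As an on-device sketch for the Pi, here's roughly what that looks like in nftables (the proxy IP, app port and router IP are all assumptions for illustration):

```
# /etc/nftables.conf - default-deny in both directions, then punch specific holes
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname "lo" accept
        ip saddr 192.168.1.10 tcp dport 8080 accept  # reverse proxy -> app port only
    }
    chain output {
        type filter hook output priority 0; policy drop;
        ct state established,related accept
        oifname "lo" accept
        ip daddr 192.168.1.1 udp dport 53 accept     # DNS to the router, if needed
    }
}
```

As long as the app itself runs unprivileged, it can't rewrite these rules even if it's compromised.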
The idea here is to prevent an attacker moving sideways (ie from the device they've compromised to another device).
Then, even if they own a device, they can't do anything useful with it - even if they extract all the keys, passwords and secrets from it.
Another interesting thing is to run an (outbound) proxy on your firewall. Force all outbound 80/443 through the proxy (via DNAT rules), and have your servers trust the proxy's root CA.
This means the firewall can decrypt all outbound HTTP/HTTPS connections, allowing for easy packet inspection.
If a server is downloading dodgy scripts, then hopefully your Deep Packet Inspection tool will catch them and shut it down.
You can do the same for DNS resolution (dnat redirect to a DNS resolving service), and whitelist only the DNS entries your services require.
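The DNS redirect part might look like this on an nftables firewall (the server subnet and resolver IP are placeholders):

```
# Force all DNS from the server VLAN through the local resolver,
# even if a host has a hard-coded 8.8.8.8 or similar
table ip nat {
    chain prerouting {
        type nat hook prerouting priority dstnat;
        ip saddr 10.0.20.0/24 udp dport 53 dnat to 10.0.0.53
        ip saddr 10.0.20.0/24 tcp dport 53 dnat to 10.0.0.53
    }
}
```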
Most basic attacks would be easily caught and completely shut down by these kinds of measures.
There is also crowdsec.
It can do a bunch of things.
It can run on things like opnsense, or the servers themselves.
It can then detect malicious traffic according to crowdsourced detection metrics (which, I believe, you contribute to by using it) and block it.
But, honestly, might as well just use Cloudflare Tunnels.
Security is about layers.
What if someone manages to get Remote Code Execution on this service? How do I detect it? How do I prevent it? How do I limit it to that device?
And have regular offline backups. So if something gets hacked and ransomwared, you can wipe the server and restore from a backup (and practice this - no point having a backup strategy if it turns out not to work when you need it).
But honestly? Do you need to expose this to the internet?
Would it be easier to run a VPN, so only trusted devices can connect to it in the first place? (Wireguard would be my recommendation here)
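A bare-bones WireGuard server config, just to show how little there is to it (keys and IPs are placeholders):

```
# /etc/wireguard/wg0.conf on the home server
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# a trusted device, eg your phone
PublicKey  = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with `wg-quick up wg0`, forward UDP 51820 on the router, and that's the only port you expose to the internet.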
FML, I've had to try to color-match by eye before between different screens by the same manufacturer.
For whatever reason I wasn't provided with any calibration tools. I had some vague software tools to try and get them to align.
I spent like 8 hours trying to match these for the corporate brand colors, while still looking decent for everything else.
Shit is near impossible. If the manufacturer couldn't do it, how am I supposed to?! And with awful interfaces and no concrete way of measuring.
Like, I was taking pictures of the screens, then trying to figure out offsets and how they might relate to gamut triangles.
Client was appreciative of my (and my fellow techs') efforts, but ultimately wasn't happy, and it looked shit.
That was awkward as fuck.
I would say that a homelab is more about learning, developing, breaking things.
Running esoteric protocols, strange radio/GPS setups, setting up and tearing down CI/CD pipelines, autoscalers, over-complicated networks and storage arrays.
Whereas (self)hosting is about maintaining functionality and uptime.
You could self-host with hardware at home, or on cloud infra. Ultimately it's running services yourself instead of paying someone else to do it.
I guess self-hosting is a small step away from earning money (or does earn money): reliable uptime, regular maintenance etc.
Homelabbing is just a money sink for fun, learning and experience. Perhaps your homelab turns into self-hosting. Or perhaps part of your self-hosting infra is dedicated to a lab environment.
Homelab is as much about software as it is about hardware. Trying new filesystems, new OSs, new deployment pipelines, whatever