TedZanzibar

joined 1 year ago
[–] [email protected] 0 points 1 week ago (1 children)
  1. Sure but there's no reason to openly advertise that yours has open services behind it.
  2. Absolutely. There are countries I'm never going to travel to, so why would I need to allow access to my stuff from them? If you think it's nonsense then don't use it, but you do you and I'll do me.
  3. See point 2.

We all need to decide for ourselves what we're comfortable with and what we're not, and then implement appropriate measures to suit. I'm not sure why you're arguing with me over how I set up my own services for my own use.

[–] [email protected] 2 points 1 week ago (1 children)

Yes and no? It's not quite as black and white as that though. Yes, they can technically decrypt anything that's been encrypted with a cert that they've issued. But they can't see through any additional encryption layers applied to that traffic (e.g. encrypted password vault blobs), or see any traffic on your LAN that isn't specifically passing through the tunnel to or from the outside.

Cloudflare is a massive CDN provider, trusted to do exactly this sort of thing with the private data of equally massive companies, and they're compliant with GDPR and other such regulations. Ultimately, the likelihood that they give the slightest jot about what passes through your tunnel as an individual user is minute, but whether you're comfortable with them handling your data is something only you can decide.

There's a decent question and answer about the same thing here: https://community.cloudflare.com/t/what-data-does-cloudflare-actually-see/28660

[–] [email protected] 15 points 1 week ago (7 children)

Admittedly I'm paranoid, but I'd be looking to:

  1. Isolate your personal data from any web facing servers as much as possible. I break my own rule here with Immich, but I also...
  2. Use a Cloudflare tunnel instead of opening ports on your router directly. This keeps your IP address out of public DNS records.
  3. Use Cloudflare's WAF features to limit ingress to trusted countries at a minimum.
  4. If you can get your head around it, lock things down more with features like Cloudflare device authentication.
  5. Especially if you don't do step 4: Integrate Crowdsec into your Nginx setup to block probes, known bot IPs, and common attack vectors.

All of the above is free, but past step 2 it can be difficult to set up. The peace of mind once it is, however, is worth it to me.
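For step 2, the tunnel can run as a container alongside the services it exposes. A minimal sketch of the Compose service (the token is a placeholder you'd generate in the Cloudflare Zero Trust dashboard when creating the tunnel):

```yaml
# Hypothetical cloudflared service for a Cloudflare tunnel.
# TUNNEL_TOKEN comes from the Zero Trust dashboard; which local
# services it exposes is configured on the Cloudflare side.
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}
    restart: unless-stopped
```

The point of this arrangement is that cloudflared makes an outbound connection to Cloudflare's edge, so no inbound ports need to be opened on the router at all.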

[–] [email protected] 6 points 3 weeks ago (1 children)

I do that a lot on my phone but keep forgetting it's a thing on desktop for some reason.

[–] [email protected] 3 points 3 weeks ago

Better than using what? All I see is a bunch of stars.

[–] [email protected] 2 points 1 month ago

Yeah that's exactly what I'd done but it was insisting on trying to redirect me to the site on port 4443 for some reason.

Fixed it in the end by reverting the NPM config to default (no advanced settings) and instead using Pihole's VIRTUAL_HOST=pihole.mydomain.internal environment variable in the Docker compose file.
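For anyone hitting the same thing, the relevant bit of the Compose file looks roughly like this (the domain is obviously a placeholder, and the variable name applies to the Pihole image version I'm on):

```yaml
# Sketch of the relevant part of the Pihole Compose service.
services:
  pihole:
    image: pihole/pihole:latest
    environment:
      # Tells Pihole's embedded web server its own hostname,
      # so it generates redirects to this name instead of a port.
      - VIRTUAL_HOST=pihole.mydomain.internal
```

With that set, the NPM proxy host can stay on its default settings and the redirect loop goes away.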

Cheers for your help anyway!

[–] [email protected] 1 points 1 month ago (2 children)

Just tried this myself and mine does the same thing but I don't have anything set in the custom locations tab. What did you do to resolve it?

[–] [email protected] 4 points 1 month ago (1 children)

Synology has Container Manager, which is their GUI frontend for Docker, so if it'll run in Docker it'll run on a Syno NAS. I'm running Pihole on mine just fine.

As for the M.2 drives, you can use non-Synology ones as storage. Don't quote me on it but I've a feeling it "just works" in the EU where they're not allowed to force you to use specific brands, but if it doesn't then there's a script that removes the restriction: https://github.com/007revad/Synology_enable_M2_volume

You should check their repo as they have other useful scripts. I'm using the one that enables dedupe on non-SSD volumes myself.

[–] [email protected] 1 points 1 month ago

Mind officially blown! I've just spun up a Debian KDE instance and it's running beautifully. Exactly what I wanted, thank you!

[–] [email protected] 3 points 2 months ago

Yes, big fan of XCP-ng, we use it extensively at work, but I'm not convinced it's my best option in this case.

[–] [email protected] 3 points 2 months ago (2 children)

I'm using plenty of containers, accelerated and otherwise, but I also want a full-blown desktop that I can access from wherever. Even on a wired LAN, streaming that desktop is slow and laggy when it's hosted on my NAS, which I think is due to the lack of hardware acceleration on that system. I want to move the VM to a host that has that feature (currently running Ubuntu Server) but I need a hypervisor that doesn't require its own desktop system to be installed in order to manage it.

Plenty of good replies here to help me though.

[–] [email protected] 1 points 2 months ago (8 children)

Well indeed, that's why I want to move the VM off the NAS and onto something with some hardware acceleration. Are there any remote frontend options for KVM?

 

Quick overview of my setup: Synology NAS running a whole bunch of Docker containers and a couple of full-blown VMs, and an N100-based mini PC running Ubuntu Server for those containers that benefit from hardware acceleration.

On the NAS I have a Linux Mint VM that I use for various desktoppy things, but performance via RDP or NoMachine and so on is just bad. I think it's ultimately due to the lack of acceleration, so I'd like to try running it from the mini PC instead but I'm struggling to find hypervisor options.

VirtualBox can be done headless, apparently, but the package installed via Apt wants to install X/Wayland and the entire desktop experience. LXC looks like it might be a viable option with its web frontend but it appears to be conflicting with Docker atm and won't run the setup.

Another option is to redo the machine with UnRaid or TrueNAS Scale but as they're designed to be full fledged NAS OSes I don't love that idea.

So what would you do? Does anyone have a similar setup with advice?

Thanks all!

Edit: Thanks for everyone's comments. I still can't get LXC to work, which is a shame because it has a nice web frontend, so I'll give KVM a go as my next option. Failing that I might well backup my Docker volumes, blat the whole thing and see what Proxmox can do.

Edit 2: Webtop looks to be exactly what I was looking for. Thanks again for everyone's help and suggestions.
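For anyone finding this later: Webtop is one of the linuxserver.io images, and a rough sketch of how I'd expect to run it looks like this (the tag, IDs, and paths are examples, and passing `/dev/dri` through is what gets you the iGPU acceleration on an N100):

```yaml
services:
  webtop:
    image: lscr.io/linuxserver/webtop:debian-kde  # tag is an example; other desktops are available
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    ports:
      - "3000:3000"        # web UI
    devices:
      - /dev/dri:/dev/dri  # pass the iGPU through for hardware acceleration
    volumes:
      - ./config:/config   # persists the desktop's home/config
    restart: unless-stopped
```

The desktop is then accessible from any browser on the LAN, which sidesteps the whole RDP/NoMachine question entirely.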

 

Specifically from the standpoint of protecting against common and not-so-common exploits.

I understand the concept of a reverse proxy and how it works on a surface level, but do any of the common recommendations (NPM, Caddy, Traefik) actually do anything worthwhile to protect against exploit probes and/or active attacks?

NPM has a "block common exploits" option but I can't find anything about what that actually does. Caddy has a module to add Crowdsec support which looks like it could be promising, but I haven't wrapped my head around it yet, and Traefik looks like a massive pain to get going in the first place!

Meanwhile Bunkerweb actually looks like it's been built with robust protections out of the box, but seems just as complicated as Traefik to set up, and DNS-based Let's Encrypt requires a pro subscription, so that's a no-go for me anyway.

Would love to hear people's thoughts on the matter and what you're doing to adequately secure your setup.

Edit: Thanks for all of your informative replies, everyone. I read them all and replied to as many as I could! In the end I've managed to get NPM working with Crowdsec, and once I get Cloudflare to include the source IP with the requests I think I'll be happy enough with that solution.
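On the source IP bit: NPM is nginx under the hood, so the real client address can be restored from Cloudflare's `CF-Connecting-IP` header using nginx's realip module. A sketch of what goes into the proxy host's advanced config (the ranges shown are examples only; use the full, current list from cloudflare.com/ips):

```nginx
# Trust Cloudflare's edge addresses so the realip module
# will rewrite $remote_addr from the CF-Connecting-IP header.
# Example ranges only -- pull the current list from cloudflare.com/ips.
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
# ...repeat for the rest of Cloudflare's published ranges...
real_ip_header CF-Connecting-IP;
```

Without this, Crowdsec only ever sees Cloudflare's addresses in the logs, so its bans never hit the actual attacker.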

 

I work in tech and am constantly finding solutions to problems, often on other people's tech blogs, that I think "I should write that down somewhere" and, well, I want to actually start doing that, but I don't want to pay someone else to host it.

I have a Synology NAS, a sweet domain name, and familiarity with both Docker and Cloudflare tunnels. Would I be opening myself up to a world of hurt if I hosted a publicly available website on my NAS using [insert simple blogging platform], in a Docker container and behind some sort of Cloudflare protection?

In theory that's enough levels of protection and isolation but I don't know enough about it to not be paranoid about everything getting popped and providing access to the wider NAS as a whole.

Update: Thanks for the replies, everyone, they've been really helpful and somewhat reassuring. I think I'm going to have a look at GitHub Pages and Cloudflare Pages as my first port of call for my needs.
