DeltaTangoLima

joined 1 year ago
[–] [email protected] 263 points 10 months ago (36 children)

This is ridiculous. It is truly ridiculous. How can something that enables the user to efficiently control their AC cause “significant economic harm”???

We're discussing this over in [email protected]. This absolutely has to be about them losing access to data they can sell to 3rd parties. The hOn ToS will no doubt have a clause that enables this.

It's a dick move for sure.

[–] [email protected] 2 points 10 months ago

Yeah, it could be. I host my own instance of Piped, and it feels like pretty much the same experience as if I were browsing YT directly.

[–] [email protected] 12 points 10 months ago (7 children)

laughs in Piped

Sorry - smugness isn't nice. But I don't know why people are still trying to use YouTube directly, when there are clearly much better options.

[–] [email protected] 1 points 10 months ago

Ah - I only have the Chromecast GTVs. Good to know I don't need to pay for an upgrade then!

[–] [email protected] 1 points 10 months ago

Lol - not my first rodeo. I'm blocking dns.google as well, and I'm 99.999% certain Google won't have coded Chromecasts to use anyone else's DNS servers.

[–] [email protected] 2 points 10 months ago (4 children)

Really? I run several Chromecasts, and I block their access to all DNS services except my internal Pi-holes. They work just fine.
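
For anyone wanting to copy this, here's a rough sketch of the kind of firewall rules involved - assuming an nftables-based router, a LAN bridge called br-lan, and a Pi-hole at 192.168.20.53 (all hypothetical values, adapt to your own setup):

```
# Force all LAN DNS through the internal Pi-hole (illustrative only)
table inet dns_guard {
  chain prerouting {
    type nat hook prerouting priority dstnat;
    # Redirect any outbound plain DNS to the Pi-hole
    iifname "br-lan" udp dport 53 ip daddr != 192.168.20.53 dnat ip to 192.168.20.53
    iifname "br-lan" tcp dport 53 ip daddr != 192.168.20.53 dnat ip to 192.168.20.53
  }
  chain forward {
    type filter hook forward priority filter;
    # Drop DNS-over-TLS so devices can't quietly bypass the redirect
    iifname "br-lan" tcp dport 853 drop
  }
}
```

Blocking DoH endpoints like dns.google is a separate job (it's just HTTPS on 443), which is why blocklisting those hostnames at the DNS level matters too.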

[–] [email protected] 1 points 10 months ago (1 children)

Just the stuff that's being accessed directly, so if anything's only going to be accessed via your Traefik server from outside, leave them where they are. That way, any compromise of your Traefik server doesn't let them move laterally within the same VLAN (your DMZ) to the real host.

[–] [email protected] 2 points 10 months ago (3 children)

Right, then you'll probably want to do something similar to what I'm planning next: creating a small "DMZ" VLAN for the public-facing things, and being very specific about the ACLs in/out, with a default deny for anything else.

The few things I allow public access to are via Nginx Proxy Manager, using Authelia for SSO/2FA where applicable. I'm intending to move that container into a dedicated VLAN that only allows port 443 in from anywhere (including other VLANs), and only allows specific IP/port combinations out for the services it proxies.

I don't even intend to allow SSH in/out for that container. I can console in from the Proxmox management console if required.
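
In firewall-rule terms, the policy for that proxy VLAN would look something like this. This is pseudocode, not any particular firewall's syntax, and the hostnames/IPs/ports are made up for illustration:

```
# DMZ VLAN policy sketch (e.g. a hypothetical vlan60) - default deny both ways

# Inbound
allow  tcp  any         -> npm-host:443        # the only way in: the reverse proxy
deny   any  any         -> vlan60:any          # everything else inbound

# Outbound - one explicit rule per proxied backend
allow  tcp  npm-host    -> 192.168.30.10:8096  # e.g. a Jellyfin backend
allow  tcp  npm-host    -> 192.168.30.11:8080  # e.g. a Nextcloud backend
deny   any  vlan60      -> any                 # default deny out (no SSH either)
```

The point is that even if the proxy container is popped, the attacker can only reach the exact IP:port combinations it was already proxying.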

[–] [email protected] 1 points 10 months ago (5 children)

What would you do, for a basic homelab setup (Nextcloud, Jellyfin, Vaultwarden and such)?

I guess my first question is are you intending to open up any of these to be externally available? Once you understand the surface area of a potential attack, you can be a lot more specific about how you protect yourself.

I have just about everything blocked off for external access, and use an always-on Wireguard VPN to access them when I'm not home. That makes my surface area a lot smaller, and easier to protect.
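
If it helps, here's roughly what the client side of that looks like - a minimal WireGuard sketch with placeholder keys, addresses and hostname (substitute your own):

```
# /etc/wireguard/wg0.conf on the phone/laptop (illustrative values only)
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 192.168.20.53                  # point DNS at the internal Pi-hole too

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.home:51820    # the single UDP port forwarded on the router
AllowedIPs = 192.168.0.0/16, 10.8.0.1/32   # route only home subnets via the tunnel
PersistentKeepalive = 25             # keeps NAT mappings alive for always-on use
```

One forwarded UDP port is the entire externally-reachable surface area, which is the whole appeal.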

[–] [email protected] 1 points 10 months ago

Yeah, still got my ancient free Gmail account going. Will probably revert to that.

[–] [email protected] 4 points 10 months ago* (last edited 10 months ago) (1 children)

VLANs are absolutely the key here. I run 4 SSIDs, each with its own VLAN. You haven't mentioned what switch hardware you're using, but I'm assuming it's VLAN-capable.

The (high-level) way I'd approach this would be to first assign a VLAN for each purpose. In your case, sounds like three VLANs for the different WLAN classes (people; IoT; guest) and at least another for infrastructure (maybe two - I have my Proxmox VMs in their own VLAN, separate to physical infra).


VLANS

Sounds like 5 VLANs. For the purposes of this, I'll assign them thusly:

  1. vlan10: people, 192.168.10.0/24
  2. vlan20: physical infrastructure, 192.168.20.0/24
  3. vlan30: Proxmox/virtual infra, 192.168.30.0/24
  4. vlan40: IoT, 192.168.40.0/24
  5. vlan50: guest, 192.168.50.0/24

That'll give you 254 usable IP addresses in each VLAN. I'm assuming that'll be enough. ;)


SWITCH

On your switch, define a couple of trunk ports tagging appropriate VLANs for their purpose:

  1. One for your Nighthawk, tagging VLANs 10, 20, 40 and 50 (don't need 30 - Proxmox/VMs don't use wireless)
  2. One for your Proxmox LAN interface, tagging all VLANs (you ultimately want to route all traffic through OPNsense)

If you had additional wired access points for your wireless network, you'd create additional trunk ports for those per item 1. If you have additional Proxmox servers in your cluster, ditto for item 2 above.
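
As an illustration only - the exact commands depend entirely on your switch, but on a Cisco-style managed switch those two trunk ports would look roughly like this (port numbers are hypothetical):

```
! Port to the Nighthawk - WLAN-relevant VLANs only
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,40,50

! Port to the Proxmox LAN interface - all VLANs
interface GigabitEthernet0/2
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40,50
```

Other vendors (UniFi, TP-Link, MikroTik, etc.) express the same idea differently, but "trunk port, tagged VLAN list" is the concept to look for in their docs.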


WIRELESS

I'm not that familiar with OpenWRT, but I assume you can create some sort of rules that land clients in the VLANs of your choice, tagging their traffic that way. That's how it works on my Aruba APs.

For example, anything connecting to the IoT SSID would be tagged with vlan40. Guest with vlan50, and so on.
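
I haven't tested this myself, but on OpenWRT I believe the SSID-to-VLAN mapping looks something like the following in UCI config. SSID, key and section names are placeholders:

```
# /etc/config/network - an interface riding the tagged VLAN
config interface 'iot'
        option device 'br-lan.40'    # VLAN 40 tagged on the LAN bridge
        option proto 'none'          # AP only; OPNsense does routing/DHCP

# /etc/config/wireless - the SSID that lands clients in it
config wifi-iface 'iot_ap'
        option device 'radio0'
        option mode 'ap'
        option ssid 'Home-IoT'
        option network 'iot'         # ties this SSID to the VLAN 40 interface
        option encryption 'psk2'
        option key 'changeme'
```

Repeat per SSID (people → br-lan.10, guest → br-lan.50, and so on).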


PROXMOX

  1. Create a Linux Bridge interface for the LAN interface, bridging the physical interface connected to SWITCH item 2, above
  2. Create Linux VLAN interfaces on the bridge interface, for each VLAN (per my screenshot example)

You haven't mentioned internet/WAN but, if you're going to use OPNsense as your primary firewall/router in/out of your home network, you'd also create a Linux Bridge interface on the physical interface connecting to your internet (WAN).
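
For steps 1 and 2, the Proxmox side ends up looking roughly like this in /etc/network/interfaces. The NIC name, management IP and gateway are examples, not gospel:

```
# Physical NIC feeding the trunk port (SWITCH item 2)
auto enp1s0
iface enp1s0 inet manual

# VLAN-aware bridge the VMs attach to
auto vmbr0
iface vmbr0 inet manual
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 10,20,30,40,50

# Linux VLAN interface for Proxmox's own management IP (vlan30)
auto vmbr0.30
iface vmbr0.30 inet static
        address 192.168.30.2/24
        gateway 192.168.30.1    # OPNsense's address in vlan30
```

The VMs then just specify a VLAN tag on their virtual NIC, and the bridge handles the tagging.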


OPNSENSE

This is the headfuck stage (at least, it was for me at first). Simply put, you need to attach the Proxmox interfaces to your OPNsense VM, and create VLAN interfaces inside OPNsense, for each VLAN.

I'm not going to attempt to explain it in reduced, comment form - no way I could do it justice. This guide helped me immensely in getting mine working.


If you have any issues after attempting this, just sing out mate, and I'll try and help out. Only ask is that we try and deal with it in comment form here where practical, for when Googlers in the future land here in the Fediverse.

[–] [email protected] 4 points 10 months ago* (last edited 10 months ago) (10 children)

It sounds like what you're looking to achieve is what's known as zero trust architecture (ZTA). The primary concept is that you never implicitly trust a particular piece of traffic, and always verify it instead.

The most common way I've seen this achieved is exactly what you're talking about - more micro-segmentation of your network.

The design principles are usually centred around what the crown jewels are in your network. For most companies applying ZTA, that's usually their data, especially customer data.

Ideally you create a segment that holds that data, but no processing/compute/applications. You can also create additional segments for more specific use cases if you like, but I've rarely seen this get beyond three primary segments: server; database; data storage (file servers, etc).

In your case, you can either create three separate VLANs on your Proxmox cluster, with your OPNsense firewall having an interface defined in each, or use the Proxmox firewall. I'd go the former - OPNsense is a lot more capable than the Proxmox firewall, especially if you turn on intrusion detection.

I'm not using any further segmentation beyond my VMs sitting in their own VLAN from my physical, but here's a screenshot of my networking setup on Proxmox. I wrote this reply to another post here on Selfhosted, talking about how my interfaces are set up. In my case, I have OPNsense running as a VM on the same Proxmox cluster. As I said in there, it's a bit of a headfuck getting it done, but very easy to manage once set up.

BTW, ZTA isn't overkill if it's what YOU want to do.

You're teaching yourself some very valuable skills, and you clearly have a natural talent for thinking both vertically and horizontally about your security. This shit is gold when I interview young techs. One of my favourite interview moments is when I ask about their home setups, and then get to see their passion ignite when they talk about it.
