Complaining about enshittification is just enshittification of discourse.
/s
It's the latest buzzword for expressing displeasure with big corpos. It is, unfortunately, going to get misused.
An M.2 drive makes it really difficult for a kid to pop the card out, plug it into a computer, and flash it.
I think the RPi Foundation is still holding onto its education-targeted roots.
I think the Compute Module line is targeted more at the industrial/commercial side of requirements.
And any homelab enthusiast would probably be better off buying a cheap used/refurbished thin client.
We have gone through the exact same process!
Multiple NICs, fancy DNS, Linux not replying on the same interface.
I ended up being super lazy about it and using somewhat sensible IP addresses.
And only using 1 NIC - which also massively simplified firewall rules.
Everything turned into zone-based rules (i.e. mgmt has access to dmz, vms, and wan; VMs have access to wan; DMZ has access to nothing; anything else is a specific rule. See the sketch below.)
I'm even thinking about swapping to a more zone-oriented firewall solution.
However, if I were to do it again, I'd ditch the multiple vlans (well, almost. I'd have a proxmox/hardware vlan, and a VM vlan). I'd manage VM firewalls in proxmox, and network firewalls on opnsense.
Then I can be precise about who talks to who.
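For illustration, that zone matrix boils down to something like this. Just a sketch in Python; the zone names match my setup above, and the default-deny fallthrough stands in for the "anything else is a specific rule" part:

```python
# Sketch of the zone-based policy described above: each zone lists the
# zones it is allowed to initiate traffic to; everything else is denied
# by default and needs a specific rule.
ZONE_POLICY = {
    "mgmt": {"dmz", "vms", "wan"},  # mgmt reaches everything
    "vms":  {"wan"},                # VMs only get internet access
    "dmz":  set(),                  # DMZ initiates nothing
}

def allowed(src_zone: str, dst_zone: str) -> bool:
    """Default deny: only explicitly listed zone pairs pass."""
    return dst_zone in ZONE_POLICY.get(src_zone, set())

assert allowed("mgmt", "dmz")
assert not allowed("dmz", "vms")
```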
You have to NAT through opnsense, or set up different routing tables on the VM.
Client is 192.168.1.4.
Server is 192.168.1.5 and 192.168.2.5.
Opnsense is dealing with vlan 1 and vlan 2 (for simplicity's sake) according to 192.168.VLAN.0/24, and will happily forward packets between the two subnets.
As the VM has two network devices, one on VLAN1 and one on VLAN2, it always has a direct connection to the client via VLAN1.
So, if your client connects to 192.168.2.5, it doesn't know where to send the packet.
It sends it to the gateway (opnsense), which then forwards it to vlan2.
The VM then receives the packet and replies to the client's address, 192.168.1.4 (opnsense doesn't alter the sender's address).
The way Linux works is that it sends the reply out whichever network device is in the same subnet as the destination, as opposed to replying on the same device the packet arrived on.
So the VM sends the packet out VLAN1, directly back to the client.
And this works. Packets from client to server go via opnsense, packets from server to client go directly.
For a while.
Then opnsense sees that there is an ongoing connection between vlan1 and vlan2... except it's not seeing all the proper SYN/ACK/FIN packets. So it thinks it's a timed-out connection, or something dodgy going on... and it closes the connection.
And now your client can't talk to the server through vlan2, and it has to reconnect.
I pulled my hair out over this.
I ended up just having a single NIC per VM.
Here's an SE question that might help you:
https://unix.stackexchange.com/questions/4420/reply-on-same-interface-as-incoming
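The usual fix there, if you really want to keep both NICs, is source-based policy routing, so replies leave via the interface they arrived on. A rough sketch using the addresses from my example; the interface name (eth1), table number (100), and opnsense's VLAN2 gateway address (192.168.2.1) are assumptions you'd adjust for your setup:

```python
#!/usr/bin/env python3
"""Sketch: add a routing table for the VLAN2 leg so packets sourced
from 192.168.2.5 always egress via VLAN2 instead of VLAN1."""
import subprocess

CMDS = [
    # Table 100 knows how to reach the VLAN2 subnet directly via eth1
    ["ip", "route", "add", "192.168.2.0/24", "dev", "eth1",
     "src", "192.168.2.5", "table", "100"],
    # ...and routes everything else via opnsense's VLAN2 address
    ["ip", "route", "add", "default", "via", "192.168.2.1", "table", "100"],
    # Any packet sourced from 192.168.2.5 consults table 100
    ["ip", "rule", "add", "from", "192.168.2.5", "table", "100"],
]

for cmd in CMDS:
    subprocess.run(cmd, check=True)  # needs root privileges
```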
In proxmox, especially if you are running a bunch of services (and not virtual desktops), it's much better to set up an automated way of creating a cloud-init template.
You can run the script every now and then to download an updated image, load up some sensible defaults, then create a template of the VM.
After that, you just clone the template, resize drives, tweak hardware settings, adjust any cloud-init settings, then boot the VM.
It takes a while to sort out the script, after which you get consistent up-to-date cloud-init enabled templates.
Then it's like 2 minutes to clone and configure a VM from proxmox's web-gui.
And you always get consistent ready-to-go VMs.
You can even do it via CLI, so you could ansible/terraform the whole process.
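To give a feel for it, the template-building step might look something like this. It's a sketch driving Proxmox's qm CLI from Python; the VM ID (9000), storage name (local-lvm), and image filename are assumptions; substitute whatever your script downloads:

```python
#!/usr/bin/env python3
"""Sketch of building a cloud-init template with Proxmox's qm CLI."""
import subprocess

VMID, STORAGE = "9000", "local-lvm"
IMAGE = "debian-12-genericcloud-amd64.qcow2"  # downloaded beforehand

steps = [
    # Create an empty VM with some sensible defaults
    ["qm", "create", VMID, "--name", "debian-cloud", "--memory", "2048",
     "--net0", "virtio,bridge=vmbr0"],
    # Import the cloud image as the VM's disk
    ["qm", "importdisk", VMID, IMAGE, STORAGE],
    # Attach the imported disk, a cloud-init drive, and set boot order
    ["qm", "set", VMID, "--scsihw", "virtio-scsi-pci",
     "--scsi0", f"{STORAGE}:vm-{VMID}-disk-0"],
    ["qm", "set", VMID, "--ide2", f"{STORAGE}:cloudinit"],
    ["qm", "set", VMID, "--boot", "order=scsi0"],
    # Freeze it as a template; all future VMs are clones of this
    ["qm", "template", VMID],
]

for step in steps:
    subprocess.run(step, check=True)
```

After that, cloning is one more command (qm clone 9000 123 --name my-vm --full), plus whatever drive resizing and cloud-init tweaks the new VM needs.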
I use multiple VMs, and group things either by security layer or by purpose.
When organising by security layer, I have a VM for reverse proxies. Then I have a VM for middleware/services. Another VM (or multiple) for database(s). Another VM for backend/daemon type things.
Most of them end up running docker, but still.
Lets me tightly control access between layers of the application (if the reverse proxy gets pwned, the damage is hopefully contained there. If they get through that, they only get to the middleware. Ideally the database is well protected. Of course, none of that really matters when there's a bug in my middleware code!)
Another way to do it is by purpose.
Say you have media server things, network management things, CCTV things, productivity apps, etc.
Grouping all the media server things in a VM means your DNS or whatever doesn't die when you whiff an update to the media server. Or you don't lose your CCTV when you somehow link its storage directory into the media server and then accidentally delete it.
If that makes sense.
Another way might be by backup strategy.
A database hopefully has point-in-time backup/recovery systems in place. Whereas a reverse proxy is just some config (hopefully stored on GitHub) and can easily be rebuilt from scratch.
So you could also separate things by how "live" the data is, or how often something is backed up, or how often something gets reconfigured/tweaked/updated.
I use VMs to section things out accordingly.
Takes a few extra GB of storage/memory, has a minor performance impact. But it limits the amount of damage my dumb ass can do.
I can understand where the newspapers are coming from. A lot of mobile apps do this: ads vs. paid versions.
But an ad company's product isn't sold to the end user, and its interests are often at odds with the end user's privacy.
They want to show ads to people where they are most effective. They want to prove they have shown the ads, and they want to prove that the user has been influenced by the ad.
All of this needs ridiculous tracking to support their business model.
It's the ad companies at fault.
If you decline consent to an ad company, then they should show you generic adverts.
If a website requires either ads or a subscription, then consenting to data processing should not be part of that contract.
So, as long as the websites give you the option to decline data processing from the ad company without affecting your ability to use the website, then it's fine.
I think the fingers are through the holes; it's just a stupid perspective.
God, let's hope nobody ever tries that. Higher prices because you don't consent to more invasive tracking, because it poses a higher fraud risk to the company.
Thankfully, processing the same data for fraud prevention should be a different consent process/option than processing it for targeted advertising.
That's kinda the point.
Any server you connect to knows your IP address. As does any equipment between your home network and the remote server. It has to, that's how networks work.
Processing that to ensure your IP isn't abusing their servers is legitimate interest.
Processing that along with your interactions with their website likely isn't legitimate interest, so it has to get consent (as this is likely profiling or user tracking, regardless of the cookies used).
You could argue that it is legitimate interest, but then you have to back it up in your privacy policy as to why it is required, and it could be easily challenged as it's such a broad and subjective term (whether that challenge goes anywhere is up to enforcing bodies, like the EU/ICO/whatever).
The idea is that the barrier to entry for "legitimate interest" is high enough, and abusing it carries enough risk, that it isn't the default.
Just because you have access to the data, doesn't mean you can use it however you want.
The issue with that is that there are so many different apps that process data in so many different ways.
A phone has a bunch of physical features. Letting a website/app know what's available and request access is a small extension of the hardware APIs, with clearly defined purposes.
But a financial app is going to have widely different data interests and processing than a workout app, which will be different from a video game, a calculator, a forum etc.
I don't know how it can be normalised into something programmatic.
I guess it's why law and courts are so complex. Sure, laws are written down, it should be easy... but they are regularly challenged and tested.
It's a difficult problem to solve.
The ideal way would be to cut the legalese bullshit from the privacy policy.
However, that's a legal document, so it needs the legalese.
What it actually needs is an honest, human-readable summary of what's collected, why it's used, etc.
Fraud prevention is a legitimate interest and does not need a consent request.
I'm pretty sure that is specifically called out in GDPR. Certainly ICO (UK) has loads of articles on it.
However, it's often harder to demonstrate compliance when relying on legitimate interests, so it can be easier to rely on consent.
Tbh, PoE isn't a feature most people need. And it's quite expensive, takes up a lot of room, and generates quite a bit of heat.
You can get inline PoE extractors that spit out 5V USB/barrel jack or 12V barrel jack. I use them quite a lot, and they're much cheaper than PoE HATs.