I have a K7.
My only gripe with it is that the headphone jack doesn't cut the line outputs, so I had to make an inline switch to mute my speakers for headphone-only listening.
I'm now wishing it also had XLR outputs, but I'm sure I can pick up a nice transformer balancing box from somewhere.
I have a Fiio DAC and I have no complaints.
But I don't have golden ears that can hear the difference between good DACs, excellent DACs, etc.
Above a certain level, it's good enough for me.
But not the Fremen Front of Arrakis. Bunch of splitters.
I use ghcr, and I have no issues pulling images from Amazon ECR or wherever.
Docker got there first with the adoption and marketing.
Automation tools like Ansible and Terraform have existed for ages, and are great for running things without containers.
OCI just makes it a hell of a lot easier and more portable.
"but I want to simply remind you that containers are the successor of VMs"
Successor implies replacement. I think containers are another tool in the server/hosting toolkit, not a replacement for VMs.
IMO, Pis are for tinkering or anything that needs the GPIO.
Everything else should be some cheap PC without the GPIO, or something embedded that's designed for the GPIO.
Pis are great for hobby/fun things and for prototyping.
I use Proxmox to run Debian VMs that run docker compose "stacks".
Some VMs are dedicated to a single service's docker compose stack.
Some VMs run a docker compose file with a bunch of different services.
Some services are run across multiple nodes with HA VIPs and all that jazz for "guaranteed" uptime.
I see the guest VM as a collection, but there is only ever one compose file per host.
It has a bit of overhead, but it makes things really easy to reason about and to separate into VLANs, firewall rules, etc.
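The one-compose-file-per-VM pattern might look something like this (the service names, images, and ports here are hypothetical, just to illustrate the layout, not anything from my actual setup):

```yaml
# docker-compose.yml on a VM dedicated to one service's stack
# (images and names are illustrative)
services:
  app:
    image: ghcr.io/example/app:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    restart: unless-stopped
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Because the whole VM exists for this one file, `docker compose up -d` in that directory brings up the entire service, and VLAN/firewall rules can be applied per VM rather than per container.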
Docker is to servers as Flatpak is to desktop apps.
I would probably run away if I saw Flatpak on a headless server.
Apparently part of that is that EVs are more expensive for insurance companies, so they're spreading that cost around.
My insurance jumped by about 20% as well, after discounts from shopping around.
It can't just be EVs, but when I was searching, this was the main reported factor.
Or, all the insurance companies just decided to massively bump rates.
I report them as spam.
It's nothing I signed up for, and I consider it marketing. There's no unsubscribe link in the email, and it's from an unmonitored inbox.
That makes it spam, and I hope reporting it trashes their mailer IP's reputation.
Anyone concerned with that threat model can host their own instance on whatever hardware they want.
They could have the middleware load balanced over AWS/Azure/GCP/Hetzner/at-home, and have load-balanced, replicated Postgres also running on those hosts.
They could use CDN & threat protection from those cloud providers as well as Cloudflare, and really distribute the threat of that situation.
But nobody wants to fork out $$$ every month before they're even scaling to thousands of users, never mind the added complication of middleware on one provider trying to interact with a load balancer on another provider, which is forwarding to Postgres on yet another provider, let alone the geographic latencies.
Then there's trying to manage all of that, never mind the headache of an update.
But, if that is someone's threat model, then they CAN work around it.
Companies owning the actual servers and infrastructure happens at the level of enormous scale (like Twitter) or high risk (like banking, and even then, chances are they're running hardened systems that would be secure on anything).
Most companies will pass that responsibility off to a single provider and rely on that provider's skills/services for uptime.
It's cheaper, has better visibility into drive health, and things like CoW mean a file is extremely unlikely to be corrupted by a power failure (with hardware RAID, you're relying on the battery in the RAID controller for that protection; I guess you could run CoW on top of a hardware RAID). CoW also helps spread wear on SSDs.
ZFS will heal data if it finds corrupted blocks; I'm not sure a hardware RAID does.
ZFS is the same anywhere and is adjusted via software (as opposed to the Dell PERCs, which I believe require booting into what is essentially a BIOS; certainly I've never had them work through iDRAC), and you don't have to learn that RAID controller's UI (although they're never difficult).
It's also another part that could fail and require a like-for-like replacement; ZFS on SATA drives just needs to be able to access the drives.
I looked into it ages ago, and ZFS on an HBA made so much more sense than a $300 used RAID controller.
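As a sketch of how little there is to learn on the software side (the pool name and device paths below are hypothetical), creating a mirror, triggering ZFS's self-healing, and checking drive health are all done from the running OS with no controller BIOS or iDRAC session:

```shell
# Create a mirrored pool from two SATA drives hanging off an HBA
# (pool/device names are illustrative; this needs root and ZFS installed)
zpool create tank mirror /dev/sda /dev/sdb

# Scrub: ZFS reads every block, verifies checksums, and rewrites any
# corrupted block from the healthy mirror copy
zpool scrub tank

# Pool and per-drive health, straight from the OS
zpool status tank
```

The same commands work on any machine the drives are plugged into, which is the like-for-like replacement point: move the disks to a different HBA or motherboard SATA ports and `zpool import` picks the pool right back up.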