I think you meant no data cap.
The right thing to whom? Shareholders? (=
It hasn't crashed yet, but I won't be able to test it for a couple weeks (vacation time \o/).
For normal use, it looks like the crash is resolved or, at least, it's as stable as 6.7 was. The next test is to play Helldivers 2 with the Vulkan backend and see if it crashes or runs.
PowerColor Red Devil.
Under 6.7 I was able to find a combination of settings that was usable for a few days.
With 6.8, timeouts would happen within 30 minutes.
I fiddled with the sched_job module option and the system seems stable now.
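For anyone wanting to try something similar: amdgpu module options can be pinned via modprobe.d. This is only a sketch; the parameter name and value below are illustrative, so check what your kernel actually exposes before copying anything.

```shell
# Sketch: pin an amdgpu scheduler option via modprobe.d.
# The parameter name/value are illustrative -- run `modinfo -p amdgpu`
# to list the options your kernel actually supports.
echo "options amdgpu sched_jobs=64" | sudo tee /etc/modprobe.d/amdgpu-workaround.conf

# Rebuild the initramfs so the option applies at early boot, then reboot.
sudo update-initramfs -u
```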
They still have issues. I replaced my 3070 with a 7900 XTX, and the 7900 is constantly freezing with GPU ring errors and the driver completely effing up the system. I have already replaced it twice, and I am using workarounds to avoid known bugs, but they still happen every few days...
I can't remember all the details, but depending on the CPU you are running, you may need some extra configuration on OPNsense.
There were a few issues on my servers running older Intel Xeon CPUs, but I eventually fixed them by adding the proper flags to deal with the different bugs.
Other than that, running on a VM is really handy.
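To illustrate what that kind of fix can look like, here is a sketch assuming Proxmox as the hypervisor (the comment doesn't name one); the VM id and the flags themselves are placeholders, not a known fix for any specific Xeon erratum:

```shell
# Hypothetical example: pass extra CPU flags through to an OPNsense
# guest on Proxmox. VM id 100 and the chosen flags are placeholders.
qm set 100 --cpu 'host,flags=+pcid;+spec-ctrl'
```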
It is nice that you got it running, but when everything you end up doing is running services on low ports or needing specific IP addresses on different networks, rootless Podman is just a PITA.
In my case I have one Pi-hole running in a Docker container and another one that runs directly on a VM.
Someone said before "what's the point of running in a container"... Well, there really isn't any measurable overhead and you have the benefit of having a very portable configuration.
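For reference, a minimal containerized Pi-hole along those lines might look like this; the paths, timezone, and password are placeholders to adapt to your own network:

```shell
# Minimal Pi-hole container sketch -- volume paths, TZ, and the
# password are placeholders; adjust ports if 53/80 are taken.
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  -e TZ="Europe/Lisbon" \
  -e WEBPASSWORD="changeme" \
  -v ./etc-pihole:/etc/pihole \
  -v ./etc-dnsmasq.d:/etc/dnsmasq.d \
  --restart unless-stopped \
  pihole/pihole:latest
```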
I do think the compromises one has to go through for rootless Podman are not worth it in this case; for me, not even rootful worked properly (a few years ago). Still, this is a nice walkthrough for people wanting to understand more.
It is user friendly, and technically incorrect, since nothing ever lines up with reality when you use 1000, because the underlying system is base 2.
Or you get weird nonsense all around, like "my computer has 18.8gb of memory"...
I can't say whether you are overstating it, but I'll mention that I went down a similar path. I had multiple scripts running and it was a never-ending thing.
Since I moved to Smallstep, I have never had a problem.
The biggest advantage I got is with products like OPNsense: you can do automatic renewal of certificates using your internal CA.
Generating new certs is just as simple as (actually, for me, much easier than) relying on openssl or easyrsa scripts.
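To give a sense of how simple it is, here is roughly what issuing and renewing a cert from an internal step-ca instance looks like; the CA URL, hostname, and fingerprint below are placeholders:

```shell
# Sketch of issuing a cert against an internal step-ca CA.
# The CA URL, hostname, and <ca-fingerprint> are placeholders.
step ca bootstrap --ca-url https://ca.internal.example --fingerprint <ca-fingerprint>
step ca certificate "router.internal.example" router.crt router.key

# Renewal can be automated, e.g. as a background daemon:
step ca renew --daemon router.crt router.key
```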
What? Every BIOS in the world still uses the same system. Same thing for me on Linux.
Only hard drive manufacturers used a different system to inflate their numbers and pushed a marketing campaign for it, and a lot of people who didn't even use computers said "oh, that makes sense" and approved.
People who actually work with computers, memory, CPUs, and other components in base 2 just ignore this "x1000" nonsense.
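To make the mismatch concrete: a drive marketed as 500 GB (decimal, x1000) shows up as roughly 465 GiB once the OS counts in binary (x1024) units. In bash:

```shell
# A "500 GB" (decimal) drive expressed in binary GiB:
# 500 * 10^9 bytes divided by 2^30 bytes per GiB.
echo $(( 500 * 10**9 / 2**30 ))   # prints 465
```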
There are a few ways to do it, but you don't use Caddy for SSH.
The last option is how I run my Gitea instance: authorized_keys is managed by Gitea, so you don't really need to do anything high-maintenance.
~git/.ssh/authorized_keys:
/usr/local/bin/gitea:
127.0.0.14 is the local address where I expose the Git container's SSH service, but you could use different ports, IPs, etc.
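For anyone missing the contents of those two files, this is the standard SSH passthrough pattern from the Gitea documentation, sketched here with a placeholder key entry and the container's SSH port assumed to be 2222:

```shell
# ~git/.ssh/authorized_keys -- Gitea writes one forced-command entry
# per registered key (key and serial shortened/placeholder here):
#   command="/usr/local/bin/gitea serv key-1",no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... user@host

# /usr/local/bin/gitea -- a wrapper that forwards the original git
# command into the container (port 2222 is an assumption):
#!/bin/sh
ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.14 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
```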