There's a transaction fee; the higher you pay, the more priority you have (since miners get a cut).
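As a toy illustration only (not any real node's mempool logic - the transactions and sizes below are made up), the selection step is basically "sort by fee rate, take the best-paying first":

```python
# Toy sketch: miners generally rank candidate transactions by fee rate
# (fee per byte/weight) and fill the block with the best-paying ones first,
# which is why a higher fee buys higher priority.
pending = [
    {"txid": "a", "fee": 1_000, "size": 250},
    {"txid": "b", "fee": 5_000, "size": 400},
    {"txid": "c", "fee": 300,   "size": 200},
]

# Highest fee rate first: these get mined soonest.
for tx in sorted(pending, key=lambda t: t["fee"] / t["size"], reverse=True):
    print(tx["txid"], round(tx["fee"] / tx["size"], 2), "fee per byte")
```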
It's not complicated until your reputation drops for a multitude of reasons, many not even directly your fault.
Neighboring bad-acting IPs, too many automated emails sent out while you were testing, a compromised account, or pretty much any number of other things mean everyone on your domain is hosed. And email is critical.
Not in this one; iirc they actually reverse engineered and were working off of Apple's libraries, rather than proxies.
In which case the -a isn't needed.
Better not have created any new files though - git commit -a doesn't catch those without a git add first.
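A throwaway way to see that behavior, sketched in Python shelling out to git (assumes git is installed; the file names and messages are arbitrary):

```python
import pathlib
import subprocess
import tempfile

# Demo: `git commit -a` picks up changes to files git already tracks,
# but a brand-new file is invisible to it until you `git add` it.
repo = tempfile.mkdtemp()

def git(*args):
    return subprocess.run(["git", *args], cwd=repo, capture_output=True, text=True)

git("init")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "demo")

pathlib.Path(repo, "tracked.txt").write_text("v1\n")
git("add", "tracked.txt")
git("commit", "-m", "initial")

pathlib.Path(repo, "tracked.txt").write_text("v2\n")      # edit a tracked file
pathlib.Path(repo, "new.txt").write_text("brand new\n")   # create an untracked file

git("commit", "-am", "commit -a run")
print(git("status", "--short").stdout)  # prints "?? new.txt" - the new file got left behind
```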
As a Linux user (and ex-Arch user btw), I'm deeply offended.
It looks like on Blender's website there are 6 entities listed, and one of them does seem to be an individual fwiw. Here's his website: https://aras-p.info/.
The rest all seem to be corporations though - Meta, AWS, some game company I've never heard of, AMD, and Epic.
I just checked their financial report for 2022 and it looks like 50% came from patron funding (which looks to be entirely companies like Google), 5% from Epic's grant, and then 10% from corporate membership. 20% came from individuals, and the rest from random other miscellaneous things like the Blender Market. If you search for the Blender Foundation annual report 2022, the finance breakdown is near the end of the slides.
The Wikimedia Foundation is; none of the other things I listed are.
I think the key there is funding from big companies. There are tons of standards and the like in which big companies take part - both in terms of code and financial support. Big projects like the Rust compiler, the Linux kernel, Blender, etc. all seem to have a lot of code and money coming in from big companies. Sadly there's only so much you can get from individuals - pretty much the only success story I know of is the Wikimedia Foundation.
The point is to minimize privilege to the least possible - not to make it impossible to create higher-privileged containers. If a container doesn't need direct raw hardware access, doesn't need to manage low ports on the host network, etc., then why should I give it root and let it be able to do those things? Mapping it to a user, controlling what resources it has access to, and restricting its capabilities means that in the event that my container gets compromised, my entire host isn't necessarily screwed.
We're not saying "sudo shouldn't be able to run things as root", but that "by default things shouldn't be run with sudo - and you need a compelling reason to swap over when you do."
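As a rough sketch of just the "map it to a user" part in plain Python (a loose analogy only - a real container runtime also sets up namespaces, cgroups, seccomp, and capability drops; the uid/gid and binary name here are made up, and it assumes the script starts as root):

```python
import os

UNPRIV_UID = 10001  # hypothetical unprivileged user
UNPRIV_GID = 10001

def drop_privileges():
    os.setgroups([])        # clear supplementary groups
    os.setgid(UNPRIV_GID)   # change group first, while we still can
    os.setuid(UNPRIV_UID)   # after this, there's no way back to root

if __name__ == "__main__":
    drop_privileges()
    # Everything the workload does from here on happens as the unprivileged
    # user - a compromise of "my-service" doesn't hand out root on the host.
    os.execvp("my-service", ["my-service"])
```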
For context for other readers: this is referring to NAT64. NAT64 maps the entire IPv4 address space into an IPv6 subnet (typically the well-known 64:ff9b::/96 prefix). The router (which has an IPv4 address) drops the IPv6 prefix and does a normal IPv4 NAT from there. After that, the response gets forwarded back over v6.
This lets IPv6 hosts reach the IPv4 internet, and lets you run v6-only internally (unlike dual stack, which requires every host to have both v6 and v4).
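A small Python sketch of just that address-mapping step, using the well-known 64:ff9b::/96 prefix from RFC 6052 (the helper names and example address are mine):

```python
import ipaddress

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def to_nat64(ipv4: str) -> ipaddress.IPv6Address:
    """IPv6 address a v6-only client actually connects to for an IPv4 host."""
    v4 = ipaddress.IPv4Address(ipv4)
    # The IPv4 address sits in the low 32 bits after the /96 prefix.
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

def from_nat64(ipv6: str) -> ipaddress.IPv4Address:
    """What the NAT64 gateway recovers before doing an ordinary IPv4 NAT."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(ipv6)) & 0xFFFFFFFF)

print(to_nat64("93.184.216.34"))         # 64:ff9b::5db8:d822
print(from_nat64("64:ff9b::5db8:d822"))  # 93.184.216.34
```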