Traditionally, VMs would be the use case, but these days, at least in the Linux/cloud world, it's mainly containers. Containers, and the whole ecosystem built around them (Kubernetes, OpenShift, etc.), simply eat up those cores, as they're designed to scale horizontally and dynamically. See: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale
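To make that concrete, here's a rough Python sketch of the core scaling rule the Horizontal Pod Autoscaler applies (per the linked docs); the function name, example numbers, and min/max defaults are my own illustration, and the real controller adds tolerances, stabilization windows, and readiness handling on top:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    # Core HPA rule from the docs:
    # desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
    desired = math.ceil(current_replicas * current_metric / target_metric)
    # Clamp to the configured bounds (minReplicas/maxReplicas in the HPA spec).
    return max(min_replicas, min(max_replicas, desired))

# e.g. 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods.
print(desired_replicas(current_replicas=4, current_metric=90, target_metric=60))
```

The point being: as long as there's load and a metric above target, the autoscaler will happily keep spinning up pods until it hits the configured ceiling, so plentiful cores get used.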
Normally, you'd run a cluster of multiple servers to host such workloads, but imagine if all those resources were available on one physical host: it'd be a lot more efficient, since at the very least you'd be avoiding all that network overhead and delay. Of course, you'd still have at least a two-node cluster for HA, but the efficiency of a high-end node still rules.
Exactly! Imagine you have two services in a data center. If they have to communicate a lot with each other, you'd want them as close together as possible. Why? Because of the difference between sending a request over the network vs. just handing it to another process on the same host: the latter is much more efficient in terms of latency and bandwidth. There are, of course, downsides and other costs (such as the individual cores handling those requests being much less powerful), so you have to tailor your hardware allocation to your workloads. In general, if you're CPU-bound, you want more powerful CPUs (which means fewer cores per host for power reasons), and if you're I/O-bound, you want to reduce network latency as much as possible.
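To put a rough number on that, here's a self-contained Python sketch (the "service" logic, payload, and loop count are all made up for illustration) that times the same request/response loop as a plain in-process call vs. over a TCP socket on the same machine. Even the loopback path adds noticeable per-request cost, and a real hop between hosts adds network latency, serialization, and often TLS on top of that:

```python
import socket
import threading
import time

N = 10_000          # request/response round trips to time
PAYLOAD = b"ping"   # tiny request body

def handle(req: bytes) -> bytes:
    """Stand-in for the service's actual work (hypothetical)."""
    return req + b"-pong"

def in_process() -> float:
    start = time.perf_counter()
    for _ in range(N):
        handle(PAYLOAD)
    return time.perf_counter() - start

def over_local_tcp() -> float:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # grab a free port on loopback
    server.listen(1)
    port = server.getsockname()[1]

    def serve() -> None:
        conn, _ = server.accept()
        with conn:
            for _ in range(N):
                conn.sendall(handle(conn.recv(64)))

    threading.Thread(target=serve, daemon=True).start()

    client = socket.create_connection(("127.0.0.1", port))
    start = time.perf_counter()
    for _ in range(N):
        client.sendall(PAYLOAD)
        client.recv(64)
    elapsed = time.perf_counter() - start
    client.close()
    server.close()
    return elapsed

if __name__ == "__main__":
    print(f"in-process  : {in_process():.4f}s for {N} calls")
    print(f"loopback TCP: {over_local_tcp():.4f}s for {N} calls")
```

It's a crude benchmark (no framing, no concurrency), but it shows why co-locating chatty services pays off.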
Now imagine you have thousands of services. The network I/O can get pretty extreme. Plus, you occasionally have requirements such as encrypting any data that travels from one host to another. So if you can keep as many services as possible on a single host, you cut out a lot of that overhead as well.
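A quick back-of-envelope (every figure here is an assumed, illustrative value, not a measurement) of how that adds up across a fleet:

```python
# Assumed, illustrative numbers only.
calls_per_sec = 50_000     # inter-service requests across the whole fleet
cross_host_rtt = 500e-6    # ~0.5 ms round trip between hosts (before TLS/serialization)
same_host_rtt = 25e-6      # ~25 µs over loopback/IPC on a single host

# Cumulative time requests spend waiting on the wire, per wall-clock second of traffic.
print(f"cross-host: {calls_per_sec * cross_host_rtt:.1f} s of cumulative latency per second")
print(f"same-host : {calls_per_sec * same_host_rtt:.2f} s of cumulative latency per second")
```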
tl;dr: everything comes down to trade-offs and understanding the needs of your workloads, but in general, running 300 low-power cores is probably indicative of an I/O-bound application and could hypothetically be much more efficient and cost-effective.