this post was submitted on 04 Jun 2024
98 points (97.1% liked)

Selfhosted

40006 readers
575 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

founded 1 year ago

I'm in the process of wiring a home before moving in, and I'm getting excited about running 10G from my server to my computer. Then I see that 25G gear isn't that much more expensive, so I might as well run at least one fiber line. But what kind of three-node Ceph monster will it take to make use of any of this bandwidth (plus run all my Proxmox VMs and LXCs in HA), and how much heat will I have to deal with? What's your experience with high-speed homelab NAS builds and the electric-bill shock that comes later? The Epyc 7002 series looks perfect but seems to idle high.

[–] [email protected] 3 points 5 months ago (1 children)

Edit: 75 LXC containers, 22 VMs.

That's a lot of power draw for so few VMs and containers. Any particular applications running that justify such a setup?

[–] [email protected] 3 points 5 months ago

That's the total draw of the whole rack, not indicative of power per VM/LXC container. If I pop onto management on a particular box, it's only running at an average of 164 watts. Across all 5 processing nodes it's actually 953 watts (averaged over the past 7 days). So if you want to quantify it that way, it's about 10 W per container.
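A quick back-of-the-envelope check of that per-workload figure (all numbers are taken from this thread, with the workload count combining the 75 LXCs and 22 VMs mentioned in the edit above; nothing here is measured directly):

```python
# Rough per-workload power estimate from the rack figures quoted above.
total_compute_watts = 953   # 7-day average across all 5 processing nodes
workloads = 75 + 22         # 75 LXC containers + 22 VMs

watts_per_workload = total_compute_watts / workloads
print(f"{watts_per_workload:.1f} W per workload")  # ≈ 9.8 W, i.e. "about 10 W"
```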

TrueNAS is using 420 watts (30 spinning disks, 400+ TiB raw storage, closer to 350 usable; assuming 7 watts per spinning drive, we're at 210 watts in disks alone, and the spec sheet says 5 at idle and 10 at full speed). About 70 watts per firewall. That's about 1515 watts for all the compute itself.
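Sanity-checking how those pieces add up (figures as quoted; the firewall count isn't stated, so two units at ~70 W each is an assumption that makes the tally land near the ~1515 W total, give or take rounding):

```python
# Rack compute tally from the figures quoted in this comment.
compute_nodes = 953    # 5 Proxmox nodes, 7-day average
truenas = 420          # NAS box total
firewalls = 2 * 70     # assumption: two firewalls at ~70 W each

# Disk share of the NAS figure: 30 spinners at an assumed 7 W each
disks = 30 * 7
print(f"disks alone: {disks} W")    # 210 W

total = compute_nodes + truenas + firewalls
print(f"compute total: {total} W")  # 1513 W, close to the ~1515 W quoted
```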

The other 1000-ish watts goes to switches and PoE (8 cameras, 2 HDHR units, a time server and clock module, and whatever happens to be plugged in around the house using PoE). Some power is also lost in the UPS, because conversions aren't perfect. Oh, and the network KVM and pull-out monitor/keyboard.

I think the difference here is that I'm taking my whole rack into account, not looking at the power cost of just a server in isolation but also all the supporting stuff like networking. Max power draw on an ICX7750 is 586 watts, typical is 274 according to the spec sheet, and I have 2 of them trunked. Similar story with my ICX7450s: 2 trunked, with a max power load of 935 W each, but in this case that's mostly PoE budget. Considering I'm using a little shy of 1 kW on networking, I have a lot of power overhead here that I'm not using. But I do have the 6x40 Gbps modules on the 7750.
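Those spec-sheet numbers give a rough sense of the networking headroom (per-unit figures are the ones quoted above; the comparison against the observed ~1 kW is just arithmetic, not a measurement):

```python
# Switch power budget from the spec-sheet figures quoted above.
icx7750_typical, icx7750_max = 274, 586  # watts per ICX7750
icx7450_max = 935                        # max per ICX7450, mostly PoE budget

typical_7750_pair = 2 * icx7750_typical
max_all_four = 2 * icx7750_max + 2 * icx7450_max

print(f"typical draw, 7750 pair: {typical_7750_pair} W")   # 548 W
print(f"theoretical max, all four: {max_all_four} W")      # 3042 W
# Observed networking draw is a little under 1 kW, so there is a lot
# of unused (mostly PoE) headroom before hitting the rated maximums.
```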

With this setup I'm using ~50% of the memory I have available. I'm 2-node redundant, and if I were down 2 nodes I'd be at about 80% capacity. That's enough to add about 60 GB more of services before I'd have to worry about shedding load in a critical failure.
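Illustrating that N-2 redundancy math (the per-node RAM figure here is a hypothetical chosen for the example; only the 5-node count, ~50% utilization, ~80% post-failure figure, and ~60 GB headroom come from this thread):

```python
# N-2 memory redundancy sketch for a 5-node Proxmox/Ceph cluster.
nodes = 5
mem_per_node_gb = 128  # hypothetical per-node RAM, for illustration only

total_mem = nodes * mem_per_node_gb        # 640 GB cluster-wide
used = 0.50 * total_mem                    # ~50% in use, per the comment

surviving = (nodes - 2) * mem_per_node_gb  # capacity left after losing 2 nodes
utilization_after = used / surviving
headroom_gb = surviving - used             # room left before shedding load

print(f"utilization after 2 node failures: {utilization_after:.0%}")  # 83%
print(f"headroom before shedding load: {headroom_gb:.0f} GB")         # 64 GB
# Roughly matches the ~80% capacity and ~60 GB of headroom quoted above.
```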