Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
I’m interested in how you like Ceph.
My setup is similar, using a DS1522+ volume as shared block storage for an iSCSI SAN for three Proxmox nodes. Two nodes are micro PCs and the third is running on the 1522+. There’s a DS216j for backups.
Ceph is... fine. I feel like I don't know it well enough to properly maintain it. I only went with 10GbE because I was basically told on a homelab subreddit that Ceph will fail in unpredictable ways unless you give it crazy speeds for its storage and network. And yet, it has perpetually complained about too many placement groups.
Aside from that and the occasional monitor falling over, it's been relatively quiet? I'm tempted to use the Synology for all the storage and carve the 10GbE network up for VM traffic instead. Right now I'm using bonded USB 1GbE copper and it's kind of sketchy.
I maintained a Ceph cluster a few years back. I can verify that speeds under 10GbE will cause a lot of weird issues. Ideally, you'll even want a dedicated 10GbE link purely for Ceph's own replication and maintenance traffic, so it doesn't impact storage clients.
The PGs are a separate issue. Each PG is like a disk partition. There's some funky math and guidelines to calculate the ideal number for each pool, based on disks, OSDs, capacity, replicas, etc. Basically, more PGs means there are more (but smaller) places for Ceph to store data, which makes balancing across a larger number of nodes and drives easier, but it also means more metadata to track. So, really, it's a bit of a balancing act.
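For reference, the guideline most often quoted is roughly (OSDs × 100) / replica count, rounded to a power of two, then split across pools by their expected share of the data. Here's a minimal sketch of that rule of thumb in Python; the numbers in the example are hypothetical, not taken from this thread, and newer Ceph releases can also size pools automatically with the PG autoscaler instead of you doing this math by hand.

```python
# Rough sketch of the common placement-group rule of thumb:
#   total_pgs ≈ (num_osds * 100) / replica_count, rounded to a power of two.
# The inputs below are made-up example values, not from the setup above.

def suggested_pg_count(num_osds: int, replica_count: int,
                       target_pgs_per_osd: int = 100) -> int:
    """Return the power of two closest to (OSDs * target) / replicas."""
    raw = num_osds * target_pgs_per_osd / replica_count
    power = 1
    while power * 2 <= raw:
        power *= 2
    # Pick whichever neighboring power of two is closer to the raw value.
    return power * 2 if (raw - power) > (power * 2 - raw) else power

# Example: 6 OSDs across 3 nodes with 3 replicas -> raw 200 -> 256 PGs
print(suggested_pg_count(num_osds=6, replica_count=3))
```

The "too many placement groups" warning usually just means the configured pg_num for the pools works out to more PGs per OSD than the target, so either the pools were sized for more disks than the cluster actually has, or the target per-OSD number is set lower than the math above assumes.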