I mean, yeah, I'd prefer ZFS but, unless I'm missing something, it's a massive pain to add disks to an existing pool. You have to buy a new set of disks and create a new pool to transition from RAIDz1 to RAIDz2. That's basically the only reason it fails my criteria. I think I'd also prefer erasure coding over RAIDz2, but it seems like regular scrub operations could keep it reliable.
BTRFS sounds like it has too many footguns for me, and its raid5/6 equivalents are "not for production at this time."
Adding new disks to an existing ZFS pool is as easy as figuring out what new redundancy scheme you want, then adding them with that scheme to the pool. E.g., you have an existing pool with a RAIDz1 vdev of 3 4TB disks. You find some cheap recertified disks and want to expand with more redundancy to mitigate the risk. You buy 4 16TB disks, create a RAIDz2 vdev and add that to the existing pool. The pool grows in storage by whatever space the new vdev provides.

Critically, pools are JBODs of vdevs. You can add any number or type of vdevs to a pool. The redundancy is done at the vdev level, so you can have a pool with a mix of any RAIDzN and/or mirrors.

You don't create a new pool and transition to it. You add another vdev with whatever redundancy topology you want to the existing pool and keep writing data to it. You don't even have to take it offline. If you add a second RAIDz1 to an existing RAIDz1, you get similar redundancy to moving from RAIDz1 to RAIDz2.
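As a rough sketch of what that looks like (the pool name "tank" and the device paths are placeholders, not from the thread):

```sh
# Minimal sketch: extend an existing pool (assumed name "tank", which already
# holds a 3-disk RAIDz1 vdev) with a new 4-disk RAIDz2 vdev.
# The /dev/disk/by-id/... names are placeholders for the four new 16TB disks.
zpool add tank raidz2 \
  /dev/disk/by-id/ata-NEW16TB_1 \
  /dev/disk/by-id/ata-NEW16TB_2 \
  /dev/disk/by-id/ata-NEW16TB_3 \
  /dev/disk/by-id/ata-NEW16TB_4

# The pool stays online the whole time; verify the new layout afterwards:
zpool status tank
```

New writes get striped across both vdevs; existing data stays on the old vdev until it is rewritten.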
Finally, if you have some even stranger hardware lying around, you can combine it into appropriately sized volumes via LVM and give those to ZFS, as someone already suggested. I used to have a mirror with one real 8TB disk and one 8TB LVM volume made up of a 1TB, a 3TB and a 4TB disk. Worked like a charm.
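Something along these lines (the volume group, pool name and device paths below are made up for the example):

```sh
# Sketch: glue a 1TB, 3TB and 4TB disk (placeholder device names) into one
# ~8TB linear LVM volume, then mirror it against a real 8TB disk in a new pool.
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate spare8 /dev/sdb /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n big spare8

# Pool name "tank2" and the by-id path are placeholders as well.
zpool create tank2 mirror /dev/disk/by-id/ata-REAL8TB /dev/spare8/big
```

Keep in mind that if any one of the small disks dies, the whole LVM leg drops out of the mirror, so at that point you're relying on the single real disk.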
You end up wasting a ton of space though because each vdev has its own parity drives.
What you lose in space, you gain in redundancy. As long as you're not looking for the absolute least redundant setup, it's not a bad tradeoff. Typically running a large stripe array with a single redundancy disk isn't a great idea. And if you're running mirrors anyway, you don't lose any additional space to redundancy.
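To put rough numbers on the space trade-off (illustrative figures only, not from the thread):

```sh
# Back-of-the-envelope usable capacity for 8 x 16TB disks, ignoring overhead:
TB=16
echo "one 8-wide RAIDz2 vdev:  $(( (8 - 2) * TB )) TB usable, survives any 2 disk failures"
echo "two 4-wide RAIDz1 vdevs: $(( 2 * (4 - 1) * TB )) TB usable, survives 1 failure per vdev"
echo "two 4-wide RAIDz2 vdevs: $(( 2 * (4 - 2) * TB )) TB usable, survives 2 failures per vdev"
```

Same disk count, but the more paranoid layout gives up two extra disks to parity, which is the "wasted space" being described above.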
Yep I feel this way.
No point in pricing out a single HDD, because I'm shooting for parity on every vdev I spin up.
Fair enough, though it does add a good chunk of power draw, since HDDs are fairly power-hungry at around 5-7W each.