Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
The argument for hardware RAID has typically been about performance, but software RAID has been plenty performant for a very long time, especially for home use over USB...
Hardware RAID also ties your array to a specific RAID controller: if that card dies, you likely need a matching replacement to access your data. A Linux software RAID can be mounted by any Linux system you like, so long as you get the drive ordering correct.
There are two general categories of software RAID: the more traditional mdadm, and filesystems with built-in RAID-like features.
mdadm creates and manages the RAID in a very traditional way and presents it as a new filesystem-agnostic block device, typically something like /dev/md0. You can then put whatever you like on top (ext4, btrfs, zfs, or even LVM).
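To make that concrete, here's a minimal sketch of the mdadm route. The device names and the md0 array name are placeholders for your own disks, and the mdadm.conf path varies by distro:

```bash
# Create a RAID5 array from three equally sized partitions (hypothetical names):
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

# The result is an ordinary block device, so any filesystem works on top:
sudo mkfs.ext4 /dev/md0

# Record the array so it's assembled at boot (path is /etc/mdadm.conf on some distros):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```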
Newer filesystems like BTRFS and ZFS implement RAID-like functionality themselves, with some advantages and disadvantages. You'll want to do a bit of research here depending on the RAID level you wish to implement. BTRFS, for example, still doesn't have a mature RAID5 implementation as far as I'm aware (true as of last I checked, but double-check).
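As a rough sketch of the filesystem-level approach (the pool name "tank" and the device names are made up; substitute your own):

```bash
# ZFS: raidz1 is single-parity, roughly analogous to RAID5:
sudo zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# BTRFS: mirrored data and metadata (RAID1); its RAID5/6 modes are still
# marked unstable upstream, which is why guides tend to steer you toward RAID1/10:
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
```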
I'd also recommend thinking a bit about how you'll expand the RAID later. Run out of space? Want to add drives? Replace drives? The different implementations handle this differently. mdadm has the rather strict requirement that all members be "the same size" (though you can use a disk bigger than the others and only use part of it). ZFS, I believe, allows different-sized disks, which can make growing the array easier: you can replace one disk at a time with a larger version fairly easily (it's possible with mdadm too, just more complex).
You may also wish to add more disks in the future, and not all configurations support that.
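To illustrate the difference, here's roughly what growing each kind of array looks like, using the same hypothetical names as above (double-check against your distro's docs before running anything like this):

```bash
# mdadm: add a fourth disk to a 3-disk RAID5 and reshape onto it:
sudo mdadm --add /dev/md0 /dev/sde1
sudo mdadm --grow /dev/md0 --raid-devices=4
# Once the reshape finishes, grow the filesystem too, e.g.:
sudo resize2fs /dev/md0

# ZFS: swap disks for bigger ones one at a time; with autoexpand on,
# the pool grows once every member has been replaced and resilvered:
sudo zpool set autoexpand=on tank
sudo zpool replace tank /dev/sdb /dev/sdf
```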
I run a RAID5 on mdadm with LVM and ext4 with no trouble, but I built my RAID when BTRFS and ZFS were a bit more experimental, so I'm less sure what they do now and how stable they are. For what it's worth, my server is a Dell T110 from around 12 years ago with a 2-core Intel G850, which isn't breaking any speed records these days, and I don't notice any significant CPU usage from my setup.
I used to use mdadm, but ZFS mirrors (equivalent to RAID1) are quite nice. ZFS stores a checksum for every block. If some data is corrupted on one drive (meaning the checksum doesn't match), it automatically repairs it by reading the good copy from the mirror drive and overwriting the corrupted data. A read only fails if the data is corrupted on both drives. This helps with bitrot.
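That self-healing is exercised on demand by a scrub, which makes ZFS read back and verify every block in the pool. A quick sketch (pool name "tank" is a placeholder):

```bash
# Read and verify every block; mismatches on one side of the mirror
# are rewritten from the healthy copy on the other drive:
sudo zpool scrub tank

# Shows scrub progress plus counts of repaired and unrecoverable errors:
sudo zpool status -v tank
```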
ZFS also has raidz1 and raidz2, which use one or two disks' worth of parity and get the same self-healing advantages. I've only got two 20TB drives in my NAS though, so a mirror is fine.
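For reference, a two-disk mirror pool like that is created along these lines (the by-id paths are invented examples; they're more stable across reboots than /dev/sdX names):

```bash
sudo zpool create tank mirror \
  /dev/disk/by-id/ata-EXAMPLE_DRIVE_A \
  /dev/disk/by-id/ata-EXAMPLE_DRIVE_B
```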
If I were to redo things today, I would probably go with ZFS as well. It seems pretty robust and stable, and the flexibility in drive sizes when doing RAID is particularly appealing. I've been bitten with mdadm by two drives of the "same size" that were off by a few blocks...