Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
In my experience there are often issues with SATA SSDs over USB, but slower HDDs seem to work fine. With btrfs I would set up a regular scrubbing job to find and fix possible data errors automatically.
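For example, a monthly scrub can be scheduled with a single cron entry; a minimal sketch, assuming the volume is mounted at /mnt/data (placeholder path) and a file like /etc/cron.d/btrfs-scrub:

```
# Scrub at 03:00 on the 1st of every month.
# -B keeps the scrub in the foreground so errors show up in cron's mail/log.
0 3 1 * * root /usr/bin/btrfs scrub start -B /mnt/data
```

You can check the outcome at any time with `btrfs scrub status /mnt/data`.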
This only works for minor errors caused by small physical changes, though. A buggy USB drive that drops out and loses writes it claimed to have completed can kill a btrfs filesystem (sometimes unfixably so), especially in a multi-device scenario.
How so, if the second drive in the RAID1 retains a working copy and the checksums are correct? I've had USB drives drop out on me for longer periods before, and it was never a problem after reconnecting them and doing a scrub.
But of course RAID is not a backup, so that's only the first line of defense against data loss 😉
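For reference, the recovery routine after a dropout can be as simple as this sketch (the mount point is a placeholder):

```
# Per-device counters for read/write/corruption errors
btrfs device stats /mnt/data

# Rewrite any bad or stale copies from the good mirror (-B = foreground)
btrfs scrub start -B /mnt/data

# Once clean, reset the counters so future errors stand out
btrfs device stats -z /mnt/data
```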
The problem is at the logic level. What happens when one drive drops out but the other does not? Well, the remaining drive will continue to receive writes, because a setup like this is tolerant to exactly that fault.
Now imagine both connections are flaky and the currently available drive drops out as well. Our setup isn't that fault-tolerant, so the FS goes read-only and throws I/O errors on read.
But as the sysadmin takes a look, the drive that dropped out first reappears, so they mount the filesystem again from that drive and continue the workload.
Now we have a split brain. The drive that dropped out first missed the changes that happened on the other drive. When that other drive comes back, the two will have diverged states, and btrfs can't merge them.
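To make the sequence concrete, here's roughly what it looks like on the command line (device names are made up; don't reproduce this on data you care about):

```
# Drive A (/dev/sda) has dropped out; keep running on drive B alone
mount -o degraded /dev/sdb /mnt/data
# ... writes continue, sdb's filesystem generation advances past sda's ...

# Later sdb drops out too; the FS goes read-only and gets unmounted.
# sda has reappeared in the meantime, so it gets mounted degraded instead:
umount /mnt/data
mount -o degraded /dev/sda /mnt/data
# ... new writes land on sda. Both devices now carry diverged histories
# that btrfs cannot merge once they're reunited.
```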
That's just one possible way this can go wrong. A simpler one I alluded to is a lost write, where a drive claims to have durably written something, but if power is cut at that moment and the same sector is read after restart, it doesn't actually contain the new data. If that happens to all copies of a metadata chunk, goodbye btrfs.
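One partial mitigation for lost writes is disabling the drive's volatile write cache, so acknowledged writes are actually on stable storage, though many USB-SATA bridges don't pass this command through (device name is a placeholder):

```
# Disable the on-drive write cache (SATA drives)
hdparm -W 0 /dev/sdX

# Confirm the current setting
hdparm -W /dev/sdX
```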