this post was submitted on 04 Feb 2025
77 points (97.5% liked)

Selfhosted

I am finally making the push to self-host everything I possibly can and leave as many cloud services as possible.

I have years of Linux server admin experience, so this is not a technical question; it's more of an attempt to get some crowd wisdom on a complex migration.

I have a plan and have identified the services I would like to implement. Take it as given that the hardware I have can handle all of this. But it is a lot, so it won't happen all at once.

I would appreciate thoughts on the order in which to implement services. Installation is only phase one; migrating existing data and shaking everything down to test stability is also time-consuming. So I'd welcome any insights, especially on services that might present extra challenges once I start adding my own data, or on dependencies I haven't thought of.

The list order is not significant yet, but I would like to have an incremental plan. Those marked with * are already running and hosting my data locally with no issues.

Thanks in advance.

Base system

  • Proxmox VE 8.3
    • ZFS for a Time Machine-like backup to a local HDD (a snapshot-rotation sketch follows this list)
    • Docker VM with containers
      • Home Assistant *
      • ESPHome *
      • Paperless-ngx *
      • PhotoPrism
      • Firefly III
      • Jellyfin
      • Gitea
      • Authelia
      • Vaultwarden
      • Radicale
      • Prometheus
      • Grafana
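
Since the ZFS item above is the backbone of the rollback story, here's a minimal sketch of how a Time Machine-like snapshot rotation could be scripted, assuming the standard `zfs` CLI and a hypothetical dataset name (`tank/data`); adjust names and retention to your pool layout.

```python
#!/usr/bin/env python3
"""Minimal ZFS snapshot-rotation sketch. Dataset name and retention are placeholders."""
import subprocess
from datetime import datetime

DATASET = "tank/data"   # hypothetical dataset name; change to your own
KEEP = 30               # number of auto snapshots to retain

def zfs(*args: str) -> str:
    """Run a zfs command and return its stdout."""
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout

# 1. take a new timestamped snapshot
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
zfs("snapshot", f"{DATASET}@auto-{stamp}")

# 2. list existing auto- snapshots, oldest first
snaps = [line for line in
         zfs("list", "-t", "snapshot", "-H", "-o", "name",
             "-s", "creation", "-r", DATASET).splitlines()
         if "@auto-" in line]

# 3. destroy everything beyond the retention window
for old in snaps[:-KEEP]:
    zfs("destroy", old)
```

Pairing the snapshots with incremental `zfs send`/`zfs receive` to the pool on the backup HDD would give the incremental history described above.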
[–] [email protected] 27 points 2 weeks ago

I'd recommend migrating one service at a time (install, migrate, shake down; next service).

Either prioritize what you want declouded the most, or start with the smallest migration and snowball bigger.

[–] [email protected] 15 points 2 weeks ago (2 children)

Swap PhotoPrism for Immich. It's a lot better, imo.

[–] [email protected] 2 points 2 weeks ago (3 children)

I would like to hear a bit more about the main differences. I tried Immich first on a resource-constrained system, and it was a real resource hog, naturally. PhotoPrism seems less resource-intensive, but my new AMD Ryzen 7 mini PC is also a lot more powerful than a Pi 4.

I'm willing to go either way, and this one will probably be near the bottom of the list anyway, so I have time to learn more and perhaps change my mind.

[–] [email protected] 14 points 2 weeks ago

PhotoPrism is less "resource intensive" because it offloads face detection to a cloud service. There are also many who don't like the arbitrary choice of which features PhotoPrism locks behind its premium version.

If you can get past Immich's initial face-recognition and metadata-extraction jobs, it's a much more polished experience, but more importantly, it aligns with your goal of getting out of the cloud.

[–] [email protected] 4 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

The main difference, I would say, is the development and licensing model. PhotoPrism forces people who want to contribute to sign a CLA and give away their rights. The community is also not very active; it is mainly one developer, who can relicense the code at any time.

Immich does not have such an agreement and has a huge, active contributor community around it. Immich is also backed by FUTO, which has its pros and cons.

IMHO, the biggest pain in self-hosting is when a FOSS product turns against its community and starts anti-consumer (or anti-free-self-hoster) business practices.

Immich is far less likely to turn evil.

Edit: I think it is the biggest pain because you have to migrate every device and person to the new service.

[–] [email protected] 2 points 2 weeks ago

Good insights, thank you for the perspective. I will look into that more closely before committing.

[–] [email protected] 2 points 2 weeks ago (2 children)

Regarding mini PCs: beware of RAM overheating!

I bought some Minisforum HM90s for Proxmox self-hosting, installed 64 GB of RAM (2x 32 GB DDR4-3200 sticks), and ran memtest first to ensure the RAM was good; all three mini PCs failed to varying degrees.

The "best" would run for a couple of days and tens of passes before throwing multiple errors (tens of errors) then run for another few days without errors.

Turns out the RAM overheated: 85–95 °C surface temperature. (There's almost no space or opening for air circulation on that side of the PC.) With the lid off, two of the three computers ran memtest for a week with no errors, but one still gave occasional bursts of errors. RAM surface temperature with the lid off was still 80–85 °C.

Adding a small fan to create a slight draft dropped the temperature to 55–60 °C. I then left the computer running memtest for a few weeks while I was away, and another few weeks while I was busy with other stuff. It has now been six weeks of continuous memtest, so I'm fairly confident in the integrity of the RAM, as long as it stays cool.

It also turns out that some, but not all, RAM sticks have onboard temperature sensors; lm-sensors can read the RAM temperature if the stick has one. So I'm building an Arduino solution to monitor the temperature with an IR sensor and also control an extra fan.
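
For what it's worth, once lm-sensors sees the sticks, the readings are easy to script against. A minimal sketch, assuming `sensors -j` (JSON output) is available and the DIMMs expose a jc42-compatible sensor; the warning threshold is arbitrary:

```python
#!/usr/bin/env python3
"""Sketch: read DIMM temperatures from lm-sensors' JSON output and flag overheating."""
import json
import subprocess

THRESHOLD_C = 70.0  # arbitrary warning threshold for this sketch

raw = subprocess.run(["sensors", "-j"], check=True,
                     capture_output=True, text=True).stdout
data = json.loads(raw)

for chip, readings in data.items():
    if not chip.startswith("jc42"):        # jc42 is the usual SPD/DIMM temp driver
        continue
    for label, values in readings.items():
        if not isinstance(values, dict):   # skip the "Adapter" string entry
            continue
        for key, value in values.items():
            if key.endswith("_input"):     # e.g. temp1_input
                state = "OVERHEATING" if value >= THRESHOLD_C else "ok"
                print(f"{chip} {label}: {value:.1f} °C ({state})")
```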

[–] [email protected] 2 points 2 weeks ago

Are both Immich and PhotoPrism container-dependent, or just Immich?

(If they fail 27002, they're a hard no for me).

[–] [email protected] 9 points 2 weeks ago (1 children)

Authelia

Think about implementing this pretty early if your plan is to use it for your own services (which I'd assume).

[–] [email protected] 2 points 2 weeks ago (2 children)

You are correct that I will be using it only for internal authentication. I want to get away from my bad habit of reusing passwords on internal services, to reduce pwnage if Mr. Robot gets access ;)

Any experience with how Authelia interacts with Vaultwarden? They seem simpatico, but should I install them in tandem? Would that make anything easier?

[–] [email protected] 8 points 2 weeks ago (1 children)

No, but Vaultwarden is the one thing I don't even try to connect to Authentik, so a breach of the auth password won't give away everything else.

[–] [email protected] 1 points 2 weeks ago (2 children)

May I ask why you'd want to self-host Bitwarden if the free hosted version is almost as good, aside from a few unimportant paid perks?

[–] [email protected] 1 points 2 weeks ago (1 children)

I'm not the guy you asked, but I self-host it because I like a couple of the features (like making an org for house stuff and sharing that with certain family members); it's really awesome for OTP as well. I honestly don't know which features are the paid ones, because I went straight to Vaultwarden: I knew I wanted it in-house (physically), and Bitwarden didn't offer that.

[–] [email protected] 1 points 2 weeks ago* (last edited 2 weeks ago)

You can create an org (I think just one) under paid accounts as well and delegate access to specific collections between members.
My use case is home stuff I want to access from work (e.g. Jellyfin).

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago)

reusing passwords on internal

Please implement a password manager.

Bitwarden can do almost everything on the free tier, and the few paid perks cost $10 per year and aren't even necessary for actual usage.

[–] [email protected] 9 points 2 weeks ago (2 children)

This might be the last chance to migrate from Gitea to Forgejo and avoid whatever trainwreck Gitea is heading for. It's going to be a hard fork soon.

[–] [email protected] 7 points 2 weeks ago* (last edited 1 week ago)

Forgejo became a hard fork about a year ago: https://forgejo.org/2024-02-forking-forward/ And it seems that migration from Gitea is only possible up to Gitea version 1.22: https://forgejo.org/2024-12-gitea-compatibility/

[–] [email protected] 2 points 2 weeks ago

That's very relevant. Thanks for the heads-up. I will look into that.

[–] [email protected] 7 points 2 weeks ago (2 children)

I would recommend running Home Assistant OS in a VM instead of using the Docker container.

[–] [email protected] 2 points 2 weeks ago (2 children)
[–] [email protected] 5 points 2 weeks ago

Everything @[email protected] said, plus backups in Home Assistant OS also include add-ons, which is just very convenient.

My Proxmox setup has 3 VMs:

  1. Home Assistant OS with all the add-ons (containers) specific to Home Assistant
  2. TrueNAS with an HBA card using PCIe passthrough
  3. VM for all other services

Also, if you ever plan to switch from a virtualized environment to bare metal servers, this layout makes switching over dead easy.

[–] [email protected] 4 points 2 weeks ago

You get easy access to their add-ons with a VM (i.e., HAOS). You can do the same thing yourself, but you have to do it all (creating the containers, configuring them, figuring out how to connect them to HA/your network/etc., updating them as needed), whereas with HAOS it generally just works. If you want that control, great, but go in with that understanding.

[–] [email protected] 1 points 2 weeks ago

Thanks, a solid suggestion.

I have explored that direction and would 100% agree for most home setups. I specifically need HA running in an unsupervised environment, so add-ons are not on the table anyway. The containerized version works well for me so far, and it's consistent with my overall services scheme. I am developing an integration, and there's a whole other story to my setup that includes different networks and test servers for customer simulations using fresh installs of HAOS and the like.

[–] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

The biggest thing I'm seeing here is the creation of a bottleneck for your network services, and the potential for catastrophic failure. Here's where I foresee problems:

  1. Running everything from a single HDD(?) is going to throw your entire home and network into disarray if it fails. Consider at least adding a second drive for RAID1 if you can.
  2. You're going to run into I/O issues with the imbalance of the services you're cramming all together (a quick way to check this is sketched after the list).
  3. You don't mention backups. I'd definitely work that out first. Some of these services can take their own, but what about the bulk data volumes?
  4. You don't mention the specs of the host, but I'd make sure you have swap equal to RAM here if you're not worried about disk space. This will just prevent hard kernel I/O issues or OOM kills if it comes to that.
  5. Move network services first, storage second, nice-to-haves last.
  6. Make sure to enable any hardware offloading for network if available to you.
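
On the I/O point (item 2), one cheap way to see whether the disks are actually the bottleneck, rather than guessing, is the kernel's pressure-stall information. A rough sketch, assuming a kernel with PSI enabled (`/proc/pressure/io` present); the threshold is arbitrary:

```python
#!/usr/bin/env python3
"""Sketch: watch for I/O contention via the kernel's pressure-stall information."""
import time

WARN_AVG10 = 10.0   # percent of time tasks were stalled on I/O over the last 10 s

def io_pressure() -> dict:
    """Parse /proc/pressure/io into {'some': {...}, 'full': {...}}."""
    result = {}
    with open("/proc/pressure/io") as f:
        for line in f:
            kind, *fields = line.split()
            result[kind] = {k: float(v) for k, v in
                            (field.split("=") for field in fields)}
    return result

while True:
    avg10 = io_pressure()["some"]["avg10"]
    flag = "  <-- disks look like a bottleneck" if avg10 > WARN_AVG10 else ""
    print(f"io pressure (some, avg10): {avg10:.2f}%{flag}")
    time.sleep(10)
```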
[–] [email protected] 3 points 2 weeks ago

Proxmox Backup Server is my jam: great first-party deduplicated incremental backups. You can also spin up more than one and sync between them.

[–] [email protected] 1 points 2 weeks ago

I have a fine backup strategy and I don't really want to go into it here. I am considering my ecosystem of services at this point.

I am skeptical that this will overload my I/O if I build it slowly and allocate the resources properly. It may be the rate-limiting factor in some very occasional situations, but never a real overload situation. Most of these services just sit and listen on their respective ports most of the time. Only a few do intense processing, and even then, only when uploading new files or streaming.

I really resist throwing a lot of excess power at a single-user system. It goes against my whole ethos of appropriate and proportional tech.

[–] [email protected] 5 points 2 weeks ago (2 children)

Looks good, I use a lot of the stuff you plan to host.

Don't forget about the enabling infrastructure. Nearly everything needs a database, so get that figured out early on. An LDAP server is also helpful, even though you can just use Authelia's file backend. Decide whether you want to enable access from outside, and choose a suitable reverse proxy with a solution for certificates, if you haven't already.

If you plan to monitor the host itself, hosting Grafana on the same host as all the other services gives you no benefit when that host goes offline.

I'd get the LDAP server, the database, and the reverse proxy running first. Afterwards, configure Authelia and try to implement authentication for the first project. Gitea/Forgejo is a good first one; you can set up OIDC or Remote-User authentication with it. Once you've got this down, the other projects are a breeze to set up.
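
As a small aid for that first Authelia + Gitea/Forgejo integration, here's a sketch of a sanity check against Authelia's standard OIDC discovery document before touching the Gitea config. The hostname is a placeholder, and it assumes the OIDC provider is already enabled in Authelia's configuration:

```python
#!/usr/bin/env python3
"""Sketch: fetch the OIDC discovery document and print the endpoints a client needs."""
import json
from urllib.request import urlopen

AUTHELIA_URL = "https://auth.example.home"   # hypothetical internal hostname

# Note: a self-signed certificate would need an explicit SSL context here.
with urlopen(f"{AUTHELIA_URL}/.well-known/openid-configuration") as resp:
    discovery = json.load(resp)

# The endpoints a client such as Gitea/Forgejo will ask for:
for key in ("issuer", "authorization_endpoint", "token_endpoint", "userinfo_endpoint"):
    print(f"{key}: {discovery.get(key)}")
```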

Best of luck with your migration.

[–] [email protected] 2 points 2 weeks ago

LDAP server is also helpful, even though you can just use the file backend of Authelia.

Samba 4 AD was easy to set up and get replicating. Switch over as soon as you can.

[–] [email protected] 1 points 2 weeks ago

Oh boy, can of worms just opened. Awesome insight. I do have an ecosystem of servers already, and I have a Pi Zero 2 set aside to develop as a dedicated system watchdog for the whole shebang. I have multiple Wi-Fi networks segregated for testing and personal use, using the built-in Wi-Fi for the network connection and a Wi-Fi adapter to scan my subnetworks.

So great insight and it helps some things click into place.

[–] [email protected] 4 points 2 weeks ago (1 children)

For Home Assistant, I use the installation script from here; it works flawlessly:

https://community-scripts.github.io/ProxmoxVE/scripts

This group took over the project after the main developer passed away. The scripts are quite easy to use and just need to be run from the Proxmox host shell (once you install it, you will know where it is) :)

[–] [email protected] 1 points 2 weeks ago

I also run Home Assistant as an appliance. I wouldn't bother cramming that thing into Docker.

[–] [email protected] 4 points 2 weeks ago (1 children)

I also started with a Docker host in Proxmox, but have since switched to k3s, as I think it reduces maintenance (mainly through FluxCD). But this is only an option if you want to learn k8s or already have experience.

If Proxmox runs on a consumer SSD, I would keep an eye on the SMART values, as it wore out the disk quickly in my case. I then bought second-hand enterprise SSDs and have had no problems since. You could also offload the write-intensive processes, or use an HDD for root if possible.
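
Seconding the SMART point: wear on consumer SSDs is easy to watch programmatically. A sketch, assuming smartmontools 7+ (for `--json`) and an NVMe drive; the device path is a placeholder, and SATA drives report wear under different attribute names:

```python
#!/usr/bin/env python3
"""Sketch: read SSD wear indicators from smartctl's JSON output (typically needs root)."""
import json
import subprocess

DEVICE = "/dev/nvme0"   # hypothetical device path; adjust to your drive

raw = subprocess.run(["smartctl", "-a", "--json", DEVICE],
                     capture_output=True, text=True).stdout
report = json.loads(raw)

health = report.get("nvme_smart_health_information_log", {})
print(f"percentage used : {health.get('percentage_used')} %")
print(f"data written    : {health.get('data_units_written')} units (1 unit = 512,000 bytes)")
print(f"media errors    : {health.get('media_errors')}")
```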

I passed my storage controller directly into the VM via PCIe, as it makes ZFS backups easier and let me avoid a speed bottleneck. However, the bottleneck was mainly caused by the (virtualized) firewall and the internal communication routed through it. As a result, the internal services could only communicate at a little more than 1 Gbit/s, even though they were running on SSDs and NVMe RAIDs.

I use SQLite databases when I can, because the backups are much easier and it feels faster in most cases. However, the file should ideally be local to the VM.
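
And on the SQLite point, backups can stay easy even while a service is running by using SQLite's online-backup API instead of copying the file. A minimal sketch with placeholder paths:

```python
#!/usr/bin/env python3
"""Sketch: consistent backup of a service's SQLite database via the online-backup API."""
import sqlite3
from datetime import date

SRC = "/srv/app/data.sqlite3"                       # hypothetical database path
DST = f"/backup/app-{date.today().isoformat()}.sqlite3"

with sqlite3.connect(SRC) as src, sqlite3.connect(DST) as dst:
    src.backup(dst)   # uses SQLite's online-backup API instead of a raw file copy
print(f"backed up {SRC} -> {DST}")
```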

Otherwise I would prioritize logging and alerts, but as an experienced server admin you have probably thought of that.

[–] [email protected] 1 points 2 weeks ago

Good callout on the SMART values. That's on the priority list for my monitoring scheme now too.

[–] [email protected] 4 points 2 weeks ago (1 children)

This looks exciting. I hope the transition goes well.

I would say to get automated backups running on the first few before you do them all. That's a luxury we get "for free" with cloud services.

Note on Firefly III: I use it because I've been using it, but after roughly four years I don't really recommend it. The way I use it, anyway, I think inserting data could be easier (I do it manually on purpose), and I wish the graphs/visualizations were better. My experience with the search functionality is also subpar. I would look at other alternatives as well, but I think it's still better than not tracking finances at all. I do wonder whether using a database client to insert data and Python scripts or Grafana to analyze it would work better for me... YMMV.
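
For what that "database plus a small script" idea could look like, here's a sketch that summarises monthly spend per category, assuming a hypothetical SQLite ledger with a `transactions(date, category, amount)` table (Firefly III's real schema is more involved):

```python
#!/usr/bin/env python3
"""Sketch: summarise monthly spend per category from a hypothetical transactions table."""
import sqlite3

con = sqlite3.connect("/srv/finance/ledger.sqlite3")   # placeholder path

rows = con.execute("""
    SELECT strftime('%Y-%m', date) AS month, category, ROUND(SUM(amount), 2)
    FROM transactions
    GROUP BY month, category
    ORDER BY month, category
""").fetchall()

for month, category, total in rows:
    print(f"{month}  {category:<20} {total:>10.2f}")
```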

Good luck!

[–] [email protected] 3 points 2 weeks ago

I do have a backup plan. I will use the onboard SSD for the main system and an additional 1 TB HDD for an incremental backup of the entire system with ZFS, all to guard against garden-variety disk corruption. I also take full system copies to keep in a fire safe.

[–] [email protected] 3 points 2 weeks ago

Something I recently added that I've enjoyed is Uptime Kuma. It's simple but versatile for monitoring and notifications.

https://github.com/louislam/uptime-kuma

[–] [email protected] 2 points 2 weeks ago (1 children)

Can't go wrong with Proxmox. Though I would recommend thinking about how you want to integrate storage here. For most of the VMs you're planning to run now, that's probably not a huge issue, but in the long run you'll also want to replace Google Drive, Photos, and Netflix, and then you'll wish you'd thought about storage earlier.

Consider whether you want to deploy a separate NAS box (I can recommend TrueNAS) and do network storage in those cases (and how exactly that's going to look), or whether you want to put it all on your Proxmox system and maybe run TrueNAS as a VM.

[–] [email protected] 1 points 2 weeks ago

My storage needs will never be as huge as some setups'. The Jellyfin library will surely be the largest, as I'm part of a sneakernet pirate enclave (my friends and I swap media collections as an alternative to torrents).

But the 512 GB main drive of my mini PC should be plenty for the foreseeable future. That will be incrementally backed up to another internal HDD. I already snapshot my systems quarterly and keep that drive in a fire safe as a disaster recovery measure.

I may get to the point where I need a NAS, so I will look at TrueNAS so I can plan for that future need. My digital footprint is relatively small, as I do not hoard a lot of video media. So, hooray, something else I can migrate later!

[–] [email protected] 2 points 2 weeks ago (2 children)

What hardware are you looking at?

I would do a three-node cluster (maybe even five-node).

[–] [email protected] 4 points 2 weeks ago (1 children)

That's surely overkill for my use level. Most of these services are just listening on their web port most of the time. Yes, some like Immich or Paperless-ngx do some brief, intense processing, but I am skeptical that I need nearly that much separation. I am using an AMD Ryzen 7 5825U. I am open to ideas, but I also push back hard against over-investing in hardware for a single-person home setup.

[–] [email protected] 4 points 2 weeks ago

A three-node cluster gives you much more flexibility and potentially more uptime (assuming you do some sort of HA).

If your server has a major issue, it is really nice to be able to offload to different hardware. I learned this the hard way.

[–] [email protected] 2 points 2 weeks ago (1 children)

Why clustering? What do you need HA for in a home environment?
I couldn't care less if my Jellyfin server went down for a few hours due to some config change.
Will some people be unhappy because my stuff isn't available? Maybe. Do I care? Depends on who it is.

Anyway: way overkill outside of homelabbing and gaining experience for the lols.

[–] [email protected] 2 points 2 weeks ago

I don't want to spend a bunch of time troubleshooting something. Having a way to move my stuff to a different host when a host crashes is very nice.
