I'm going through hell, trying to update from truenas scale 24.04 to 24.10
What's not working? I just set up TrueNAS for the first time, went with 25.04 and figured I could just update my way out of potential bugs, but the updater is broken :D
Well, firstly I had this weird issue where the pools were giving me errors because some folder was missing. I fixed that, but 24.10 has literally zero compatibility with apps from 24.04, and it looks like I'm going to have to reset the whole pool in order to use their new apps ecosystem (trying to install anything from 24.10 just errors out)... which is extremely annoying, as I have quite a lot of apps set up.
I upgraded immich without breaking everything. That's always reason to celebrate.
How exactly does stuff get broken? I've never really had a problem bumping up the version in Docker. The only issue has been the Play Store sometimes taking longer to push updates for the mobile apps.
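For what it's worth, the routine bump is usually just a Compose pull-and-recreate (a minimal sketch, assuming the stack lives in a docker-compose.yml and you've skimmed the release notes for breaking changes first -- the /opt/immich path is just an example):

cd /opt/immich          # assumed stack location
docker compose pull     # fetch the newer images referenced in the compose file
docker compose up -d    # recreate containers on the new images
docker image prune -f   # optional: drop the superseded image layers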
I feel you 😂
I finally moved from reddit to Lemmy. Maybe a 3-4 hour setup time to get it all working lol.
Cool! Which installation method did you use?
I did manual docker. I host some other things as well, so running it through nginx proxy manager that I already had set up.
Finally moved all my LXCs onto a lower-power Xeon D host; it consumes 1/3 the electricity of my previous Dell R430 with essentially the same performance.
I plan on setting up the *arr suite and getting rid of Netflix, Crunchyroll, Amazon Prime and Disney+
You can use https://schedule.lemmings.world/ to automate the posts. Or, given the community we're in, you can selfhost it!
This week I've been doing some work on my GOG Downloader to finally back up all my GOG stuff when I buy new disks, that's pretty much it for my selfhost/homeserver stuff this week.
I didn't know that, cool! Though I should probably talk to the mods before setting up such a thing.
I'm the one who files the most bug reports on github under a different name. Our instance runs on Lemmy Schedule, so thanks!
Finally got my lemmy instance fully updated.
Been improving my backup scripts in advance of adding backup to a server.
Updated servers and other services.
I've had two failed hard drives in the last month. Not sure if it's a bad batch or what. Thankfully the order these came in only had the two drives, so I may not see more failures. They are under warranty, but it's still a pain!
Otherwise I'm enjoying Mealie lately for my recipes. Kinda nice having them all in one place but accessible by anyone in the house.
Had a hard drive fail in my main ZFS array. It's the first time I've experienced a disk failure, so it was a bit worrying. Thankfully I had added an additional drive to expand the array, so I was able to quickly rebuild onto that drive. Currently shopping for a replacement. From now on I think I'll keep a cold spare just in case this happens again. I just wish hard drives would stop increasing in price.
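For anyone who hasn't been through it yet, the swap itself is usually only a couple of commands (a sketch -- the pool name tank and device /dev/sdX are assumptions, and the exact steps depend on how your vdevs are laid out):

zpool status tank                            # identify the FAULTED/UNAVAIL disk
zpool replace tank <old-disk-id> /dev/sdX    # resilver onto the replacement
zpool status tank                            # watch resilver progress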
You can save some money by buying recertified drives from Serverpartdeals.
Trying to get my hands dirty with LLMs, Ollama and web scraping.
I don't understand most of it, but hey, that's the fun. No complaints.
Just swapped VPS hosts from ssdnodes to MassiveGRID. Got a pretty sweet deal, so I'm pretty excited.
Got my services transferred over this week and it's been fun as hell. It's interesting because I was discussing Portainer with my buddy, and he has Portainer on his local PC connecting to his remote instances; in hindsight it sounds obvious, of course, but it's such a nice little setup. Just finished setting up my Jellyfin reverse proxy, so I'm gonna watch a movie and chill.
I used Portainer for a while and still like it for checking out networking stuff, but try out Dockge! It's more open sourcey and basic, but makes updating easier.
Since it's winter and I mostly don't want to leave my house, I busted out an unused Raspberry Pi 4b a couple weeks ago. Started with CasaOS and AdGuard. Have now added a few other services including Navidrome to serve up a lot of local-area music for myself and friends. Got a Cloudflare tunnel set up, then some authentication through CF as well. And finally secured a static IP from my ISP. This is the farthest along I've ever gotten with any of this and it's been going great. Nearly every hurdle I've encountered I've been able to work through.
Two things causing me grief today though:
- I also have Nextcloud hosted on a VPS and I cannot get to the point of running occ commands. First it wasn't found, then there was no PHP CLI, then just errors. I gave up. (One common way to invoke it is sketched after this list.)
- I'm using Homer because it's just so simple, but the theming and CSS are driving me nuts. Sure, I can change colors, but will this little bar in the neon theme change from 4em to 100% for me? NOPE. Override fonts? Nosir. All good though.
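On the occ front, how you call it depends entirely on the install method. For the official Docker image it's normally run inside the container as the web user; for a bare-metal install it goes through the system PHP as the webserver user (a sketch -- the container name nextcloud and the /var/www/nextcloud webroot are assumptions):

docker exec -u www-data nextcloud php occ status
sudo -u www-data php /var/www/nextcloud/occ status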
I've been working on some bash scripts to help manage my media files. I've been slowly working on learning more bash and I'm pretty pleased with my progress. After I finish this bash book I'm reading (can't remember the title atm), I think I'm gonna jump into awk.
Bash is a really great shell, but consider trying out a functional shell scripting language like Elvish (which is also a shell). Syntactically it's pretty similar and not hard to pick up, but it's stupid powerful. A cool example is updating different servers via ssh in parallel using a servers.json file:
[
{"name": "server.com", "user": "root", "identity": "~/.ssh/private_key0", "cmd": "apt update; apt upgrade -y"},
{"name": "serverb.com", "user": "root", "identity": "~/.ssh/private_key1", "cmd": "pacman -Syu"},
{"name": "serverc.com", "user": "root", "identity": "~/.ssh/private_key2", "cmd": "apk update; apk upgrade"}
]
and a little Elvish magic:
var hosts = (from-json < servers.json)
peach {|h|
ssh $h[user]@$h[name] -i $h[identity] $h[cmd] > ssh-$h[name].log
} $hosts
Just run the script and boom, done. You can even swap out peach (which is parallel each) for each if you want to run each command sequentially -- but I really love using peach, especially with file operations over many different files. Linux is fast, but peach is fuckin' crazy fast. Especially for deleting files (fd -e conf -t file | peach {|x| rm $x }), or one thing that I do is extract internal subs (so they play on my Chromecast) on my Jellyfin server; using Elvish makes it really fast:
fd -e mkv | peach {|x| ffmpeg -i $x -map 0:s:0 $x.srt }
Find all *.mkv files, pass the filenames through ffmpeg (using peach), and extract the first subtitle track as filename.mkv.srt. It only takes a few seconds to do thousands and thousands of video files. I highly recommend it for home-labbers.
Pretty dumb example, but peach is like 6x faster:
❯ time { range 0 1000 | each {|x| touch $x.txt }}
5.2591751s
❯ time { range 0 1000 | peach {|x| touch $x.txt }}
776.2411ms
A third, and hopefully final, attempt at getting an iRedMail setup going. SPF, DKIM & DMARC are all checking out fine. It's actually working this time. Need to get the ISP to change our PTR record though; that's the last bit of the puzzle.
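For anyone wanting to verify the same records from the outside, the usual spot-checks look like this (a sketch -- example.com, the dkim selector, and 203.0.113.10 are placeholders):

dig +short TXT example.com                      # SPF
dig +short TXT dkim._domainkey.example.com      # DKIM public key for the 'dkim' selector
dig +short TXT _dmarc.example.com               # DMARC policy
dig +short -x 203.0.113.10                      # PTR for the mail server's IP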
Also picked up a used Netgate device, so we now have pfSense fronting everything. That's allowed me to move the original router to a better location and put it in AP mode.
Emby media server moved off a Synology and into a proxmox container. Finally, we can stream high def with the hardware acceleration we weren't getting before.
ITT: lots of busted pihole v6 updates
Finally got started with Grafana, Prometheus and Meshtastic.
I feel bi-weekly is a good rhythm for this.
What does biweekly mean to you? Twice a week, or once every two weeks? If it's the latter, I prefer to use fortnightly, since it's not ambiguous.
I mean every other week. I wasn't aware of the other interpretation, but I think in combination with "The Sunday thread" it's unambiguous?
I have never heard fortnightly, but then I'm not a native speaker. Is that commonly used?
I have always heard bi-weekly used for every other week, and semi-weekly for twice a week.
I'm a new selfhoster and reached the limit of what my DS923+ can handle after setting up an Immich instance (on top of qBittorrent, Radarr/Sonarr, and Plex). So I picked up a mini PC this week and migrated the Immich stack over (pointing at an NFS mount from the NAS!) and now it's running super smooth 🙌 Now I'm hyped to move over more services and eventually start separating media services from mission-critical stuff like photos when I have another machine handy.
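The NFS piece can be tested with a one-off mount before committing anything to fstab (a sketch -- the NAS IP and export path are made up; check the actual export in DSM's NFS permissions):

sudo mkdir -p /mnt/photos
sudo mount -t nfs 192.168.1.50:/volume1/photos /mnt/photos
# once it behaves, add a matching /etc/fstab entry with the _netdev option so it waits for the network at boot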
I wanted to set up local domain resolution for my devices, so I can stop visiting sites by their local 192.168.1.x IPs, so I started following some guides to run dnsmasq on the mini PC (Ubuntu Server) and add entries to /etc/hosts. It was pretty easy to get working OK, but for whatever reason DNS doesn't seem to work on a fresh boot. My local workstation can't resolve the custom entries for my devices until I run sudo systemctl restart dnsmasq on the mini PC, after which everything works fine, which leads me to believe it's some weird boot-order problem? I'm trying not to screw with it too much before bed, but hopefully I can figure out what's going on this week.
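If it is boot ordering, one common fix is telling dnsmasq to wait until the network is fully online (a sketch, assuming stock Ubuntu Server unit names; also worth checking journalctl -u dnsmasq -b for the actual startup error):

sudo systemctl edit dnsmasq
# in the drop-in editor that opens, add:
#   [Unit]
#   Wants=network-online.target
#   After=network-online.target
sudo systemctl enable systemd-networkd-wait-online.service
sudo systemctl daemon-reload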
If you want to have domains assigned to local IP addresses, you can also use Pi-hole as a local DNS! It's a very nice tool for ad-blocking at the network level anyway; can only recommend it.
Highly suggest putting Caddy on a machine, forwarding ports 443 and 80 to Caddy, and then letting it do your reverse-proxy stuff. Register a domain name, point it at your IP address, and then tell Caddy that 'immich.yourdomain.bleh' goes to Immich's port and 'media.yourdomain.bleh' goes to Plex's port -- Caddy handles all of the TLS stuff, handshaking, you name it, so you can have secure sites with proper certs.
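A sketch of what that Caddyfile ends up looking like (the hostnames are placeholders, 2283/32400 are just the common Immich/Plex defaults, and the path assumes the distro package that reads /etc/caddy/Caddyfile):

sudo tee /etc/caddy/Caddyfile >/dev/null <<'EOF'
immich.yourdomain.bleh {
    reverse_proxy localhost:2283
}
media.yourdomain.bleh {
    reverse_proxy localhost:32400
}
EOF
sudo systemctl reload caddy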
Then make sure those things are isolated from your home network through vlans if your router supports it.
You can get fancier with it by using Tailscale and getting a datacenter IP to forward into your network.
Pi-hole 6 broke my DNS (dnsmasq), and since I had a firewall rule in OPNsense to only allow Pi-hole's DNS and deny public DNS access, it was an early rise for me :)
Pushed Wireguard back onto my network. I've been a Tailscale user for a couple of years, but never really saw the need for it for me as I'm the only user of the service. :)
I will freely admit though, there's nothing wrong with the service and honestly is great if you are behind a CGNAT router or don't want to use Cloudflare for your tunneling.
I'm currently looking to connect an NVMe SSD to a Pi 4 I have in a different location, to finally have proper 3-2-1 backups. I'm trying to find an NVMe-to-USB adapter that will work, though.
Personally I'm mostly busy with my homelab migration, so there's not much on the selfhosting side except OS updates. I set up meshmini earlier to access my thin clients via vPro/AMT, but I need to configure the clients before I can actually use meshmini. Once I'm done with that I'll finally be able to set up Lemmy and Pine pods.
My selfhosted stuff currently works fine without me doing much, which feels good and lets me focus on the hardware side for now.
I finally got Linkwarden up and running, but I'm chasing down some failures on a few websites.
Also realized that biting the bullet for unlimited bandwidth (screw you, Comcast!) means I can run ArchiveTeam Warrior, so that's been going.
I'm setting up Seafile and trying to swap everything from Docker to Podman. The longer-term goal is that once everything is on Podman, I'll get a new NVMe drive and install MicroOS so I can retire my old SATA SSD (I've had it for 10 years or so, across 3 PCs).
I'm also considering setting up Forgejo and getting a worker to build my Rust projects.
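One thing that can ease the Docker-to-Podman swap: rootless Podman can expose a docker-compatible API socket, so existing compose files often keep working unchanged (a sketch, assuming a recent Podman that ships the podman.socket user unit; privileged containers may still need rework):

systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker compose up -d    # or podman-compose up -d; either way the containers run under Podman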
Realised my Jellyfin LXC had a maxed-out boot disk yesterday; I hadn't been using it for a while. Luckily I have decent backups set up, so I was able to restore a backup from late January, before it filled up. A quick library rescan and everything was up and running again.
After upgrading my Pi-hole to v6, for some reason it stopped recognizing any of the blocklists yesterday. So I reset it, and now it works.
Currently doing a full backup of 37TB to tape, which I would normally do once per quarter, but I got a SMART error on one of my drives that I'll have to replace, and before shutting down and removing the drive I want a full backup. I might even get warranty on the drive; at least I did the last time this happened, and the drive has fewer than 200 days of runtime. We'll see.
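If you want to gauge how bad the drive actually is before the warranty claim, the quick checks look like this (a sketch; the device name is an assumption):

sudo smartctl -H /dev/sdX                                          # overall health verdict
sudo smartctl -A /dev/sdX | grep -Ei 'realloc|pending|uncorrect'   # the attributes that matter most
sudo smartctl -l error /dev/sdX                                    # recent logged errors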
Many issues this week:
- Broke external-dns on my kube cluster because I updated my Pihole to v6
- Thinking of a way to expose a game server externally (I've usually used CF tunnels for specific services, but couldn't get it to work since it's TCP/UDP and not HTTP traffic) -- one common workaround is sketched below
But at least I got my Velero backups working on a private S3.
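On the game server: since Cloudflare tunnels are aimed at HTTP, one common workaround is a small VPS that DNATs the game ports over an existing WireGuard link back home (a sketch -- the port 27015, the addresses, and the interface names are all assumptions, and you'll still need matching FORWARD rules in your firewall):

# on the VPS, with wg0 up and the home peer reachable at 10.0.0.2
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 27015 -j DNAT --to-destination 10.0.0.2:27015
iptables -t nat -A POSTROUTING -o wg0 -p udp -d 10.0.0.2 --dport 27015 -j MASQUERADE
# repeat with -p tcp if the game also uses TCP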
Immich. I wanted to exclusively use the external libraries feature in read-only mode.
Set it up once in its own Proxmox LXC under Docker, got it all configured properly, and started scanning my entire library. When I woke up it had crashed and I couldn't recover it.
Started over the following morning and only gave it access to 2024 instead of everything. It still filled up the 30-40 GB I gave it with thumbnails, files, and such. Guess it crashed the other day because it took up too much room.
Guess I'll start over again and ensure all the config files and thumbnails are stored on my NAS, so they can take up the space they need without overloading the main (small) SSD on my server.
My pihole exploded yesterday, all my fault. A couple of years ago, I created a script, called via cron, to update Pi-hole's services every other week. This was great, until now, when it updated to v6 at 4am. To make matters worse, I neglected to automate Raspbian updates, meaning the OS was very out of date and no longer compatible with pihole-FTL (thinking back, I thought I had automated that too, but I guess not).
I took an image after creating a Pi-hole "teleporter" backup and began formatting. In my lack of caffeine and focus, I missed that my teleporter file was corrupt until after I had already wiped the SD card. Thankfully I had that image, as I was able to mount it and retrieve my blocklists via sqlite; otherwise I would have had to start from scratch.
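For anyone in the same spot: the blocklist URLs live in the adlist table of Pi-hole's gravity database, so pulling them out of a mounted image looks roughly like this (a sketch; the mount point is an assumption):

sqlite3 /mnt/pi-image/etc/pihole/gravity.db "SELECT address FROM adlist;" > blocklists.txt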
One good thing that came out of it (for my taste, anyway) was that I swapped the OS on the pi to fedora. No more debian around here!
Tomorrow, I plan on setting up some backup automation for my pi, as it's the only machine missing backups at this point.
Got Prometheus and Grafana set up with HTTPS on my Talos Linux cluster. I tried cert-manager with a DNS-01 challenge against Let's Encrypt, but I was using a local TLD and found out they won't issue for that, so I had to switch to a local issuer. I was using MetalLB to get a routable IP, and I put Prometheus and Grafana behind the NGINX ingress controller. Next time I can tinker I'll put the rest of my services behind it too.
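For the local-issuer route, the minimal version is generating a CA and handing it to cert-manager as a secret that a CA ClusterIssuer references (a sketch -- the names and the cert-manager namespace are assumptions):

openssl req -x509 -newkey rsa:4096 -nodes -days 3650 -subj "/CN=homelab-ca" -addext "basicConstraints=critical,CA:TRUE" -keyout ca.key -out ca.crt
kubectl -n cert-manager create secret tls homelab-ca --cert=ca.crt --key=ca.key
# then point a ClusterIssuer of type `ca` at the homelab-ca secret,
# and trust ca.crt on whatever clients will be hitting Grafana/Prometheus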