Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

1
 
 

First, a hardware question. I'm looking for a computer to use as a... router? Louis calls it a router, but it's a computer that sits upstream of my whole network and has two Ethernet ports. Any suggestions on this? Ideal amount of RAM? Ideal processor/speed? I have fiber internet, 10 Gbps up and 10 Gbps down, so I'm willing to spend a little more on higher-bandwidth components. I'm assuming I won't need a GPU.

Anyway, has anyone had a chance to look at his guide? It's accompanied by two YouTube videos that are about 7 hours each.

I don't expect to do everything in his guide. I'd like to be able to VPN into my home network and SSH into some of my projects, use Immich, check out Plex or similar, and set up a NAS. Maybe other stuff after that but those are my main interests.

Any advice/links for a beginner are more than welcome.

Edit: thanks for all the info, lots of good stuff here. OpenWRT seems to be the most frequently recommended thing here, so I'm looking into that now. Unfortunately my current router/AP (Asus AX6600) is not supported. I was hoping not to have to replace it; it was kinda pricey, and I got it when I upgraded to fiber since it can do 6.6 Gbps. I'm currently looking into devices I can put upstream of my current hardware, but I might have to bite the bullet and replace it.

Edit 2: This is looking pretty good right now.

2
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
 
 

Hi all,

It's been a long... years at work and my brain is fried, with no bandwidth at the moment to properly work out how to migrate a Btrfs array from Unraid over to Proxmox. I can see the array in Proxmox and am able to mount it, but now I cannot for the life of me figure out how to

  1. verify that the data is intact
  2. assign it to a storage pool for use in VMs
  3. view it within Proxmox
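
A sketch of the direction for 1 and 2, assuming the array is already mounted; the mount point and storage name below are just examples:

    # 1. verify the data: scrub the filesystem and check per-device error counters
    btrfs scrub start -B /mnt/btrfs-pool
    btrfs device stats /mnt/btrfs-pool

    # 2. expose it to Proxmox as directory-backed storage for VM disks and backups
    #    (once added, it also shows up under Datacenter -> Storage in the GUI, which covers 3)
    pvesm add dir unraid-data --path /mnt/btrfs-pool --content images,rootdir,backup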

I haven't touched Proxmox in years after settling on Unraid a while back, but I'm looking to move back to a non-Unraid config.

Anyone here have experience with Btrfs and Proxmox? Any good links to a tutorial or video?

Thanks!

4
 
 

cross-posted from: https://lemmy.ml/post/24823173

Hi folks, looking for a bit of a steer to get off the ground with self-hosting. My goals to start with are pretty straightforward:

  • I want to set up Home Assistant to move my smart devices off the cloud and fully contained within the walls of my home.
  • I want to set up my own little Pixelfed server for my family's use, along with some other federated socials.

From what I was looking at, I think my easiest route to doing both of these things is a Home Assistant Yellow (built-in Zigbee and Thread radios) carrying a Raspberry Pi Compute Module 4.

I've never done anything like this before but I'm interested in learning. If anyone more experienced has any insight or direction, I'd really appreciate it! Cheers!

5
 
 

Hello,

I've attached a diagram of the setup I'm trying to achieve. Hopefully it's clearer than trying to explain it with text...

Basically I'm trying to stream the camera to a selfhosted webpage.

The camera is connected to the VPN server

The stream is picked up on the Media Server (MediaMTX)

The stream is available from anywhere on the local network via whatever protocol MediaMTX offers. All good here.

The webserver set up is Nginx. Works fine.

A basic Wordpress site is set up and I can access it via a domain name over the internet with HTTPS.

What I'm struggling with is getting the "local stream" (read: local IP) into the website. I have WP plugins that let me embed streams, but I suspect the issue is that the local IP is not available over the internet, so you can't just point it to 192.X.X.X. That said, even on my local network I can't see the stream.

So the questions are,

  1. how can I serve the stream to nginx/ wordpress and
  2. can I somehow have nginx treat the stream as a locally hosted resource that can proxy the stream to remote web browsers?

Ideally I don't want to open up a port on the LAN for direct streaming to the internet, which the website then points to, as it seems unsafe... But if that's the only way then I guess it can't be helped.

Happy to provide more info if needed.

TIA

Edit: WordPress is for a separate website project outside the scope of this post. Only one page will be for the video player/stream, but there will be other uses for the website. Not just streaming.

Edit 2: Seems the general consensus is that I do need to publicise my video stream.

I've just made my website accessible through its local IP and gotten embedded HLS and WebRTC streams working. Putting the domain back in, the videos no longer play, so it's certainly a network access issue, or possibly an HTTPS issue, as the streams are currently HTTP.

I didn't realise you could reverse proxy a video stream! (Even though I did once upon a time use the nginx RTMP server.)

I've also been made aware of Tailscale + Funnel, which does a similar thing without exposing my own domain.

I'll have a go at reverse proxying it, which should also sort out the https issue and hopefully be done 🤞
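
For anyone following along, the reverse proxy piece is roughly a block like this; a sketch only, assuming MediaMTX serves HLS on port 8888 on the same box and that the snippet gets included inside the existing server {} block:

    # hypothetical include for the existing nginx server {} block
    sudo tee /etc/nginx/snippets/mediamtx-hls.conf > /dev/null <<'EOF'
    # proxy /live/ on the public site to MediaMTX's local HLS endpoint
    location /live/ {
        proxy_pass http://127.0.0.1:8888/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
    EOF
    # add "include snippets/mediamtx-hls.conf;" inside the server block, then:
    sudo nginx -t && sudo systemctl reload nginx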

You guys rock!

6
 
 

Today, lemmy.amxl.com suffered an outage because the rootful Lemmy podman container crashed out, and wouldn't restart.

Fixing it turned out to be more complicated than I expected, so I'm documenting the steps here in case anyone else has a similar issue with a podman container.

I tried restarting it, but got an unexpected error: the internal IP address (which I hand-assign to containers) was already in use, despite the fact that the container wasn't running.

I create my Lemmy services with podman-compose, so I deleted the Lemmy services with podman-compose down, and then re-created them with podman-compose up - that usually fixes things when they are really broken. But this time, I got a message like:

level=error msg="IPAM error: requested ip address 172.19.10.11 is already allocated to container ID 36e1a622f261862d592b7ceb05db776051003a4422d6502ea483f275b5c390f2"

The only problem is that the referenced container didn't exist at all in the output of podman ps -a - in other words, podman thought the IP address was in use by a container that it didn't know anything about! The IP address had effectively been 'leaked'.

After digging into the internals, and a few false starts trying to track down where the leaked info was kept, I found it was kept in a BoltDB file at /run/containers/networks/ipam.db - that's apparently the 'IP allocation' database. Now, the good thing about /run is it is wiped on system restart - although I didn't really want to restart all my containers just to fix Lemmy.

BoltDB doesn't come with a lot of tools, but you can install a TUI editor like this: go install github.com/br0xen/boltbrowser@latest.

I made a backup of /run/containers/networks/ipam.db just in case I screwed it up.

Then I ran sudo ~/go/bin/boltbrowser /run/containers/networks/ipam.db to open the DB (this will lock the DB and stop any containers starting or otherwise changing IP statuses until you exit).

I found the networks that were impacted, and expanded the bucket (BoltDB has a hierarchy of buckets, and eventually you get key/value pairs) for those networks, and then for the CIDR ranges the leaked IP was in. In that list, I found a record with a value equal to the container that didn't actually exist. I used D to tell boltbrowser to delete that key/value pair. I also cleaned up under ids - where this time the key was the container ID that no longer existed - and repeated for both networks my container was in.

I then exited out of boltbrowser with q.

After that, I brought my Lemmy containers back up with podman-compose up -d - and everything then worked cleanly.
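
Condensed, the recovery steps above boil down to:

    # back up the IPAM database first
    sudo cp /run/containers/networks/ipam.db ~/ipam.db.bak

    # install a BoltDB TUI browser and open the database
    go install github.com/br0xen/boltbrowser@latest
    sudo ~/go/bin/boltbrowser /run/containers/networks/ipam.db
    # (inside boltbrowser: delete the stale key/value pairs under the affected
    #  networks' CIDR and ids buckets, then quit with q)

    # bring the services back up
    podman-compose up -d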

7
 
 

Hi, what's your setup?

I often listen to music through YouTube on my phone connected to a Bluetooth speaker. I use NewPipe, which works very well. Then when I want to save a song or an album, there's the option for downloading (in NewPipe itself), or on Android, for example, Seal (works really well for downloading entire playlists and unselecting some sponsored videos from the playlist).

The hassle is uploading from the phone to my Jellyfin server. I've used File Browser, but it's a bit limited in options.

Then I thought I could use Syncthing to have a folder on my Android phone upload automatically to my Jellyfin server (a PC running DietPi), but it seems Syncthing is now discontinued on Android?

What I was first looking for was my own hosted yt-dlp with a mobile-friendly UI, but that seemed quite difficult to get running.
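
For reference, the server-side half of that with plain yt-dlp is roughly a one-liner; a sketch, where the paths and the playlist URL are placeholders:

    # pull a playlist as audio straight into the Jellyfin music library,
    # skipping anything already downloaded
    yt-dlp -x --audio-format opus --embed-metadata \
      --download-archive /srv/jellyfin/music/archive.txt \
      -o '/srv/jellyfin/music/%(uploader)s/%(title)s.%(ext)s' \
      'https://www.youtube.com/playlist?list=PLxxxxxxxx'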

8
 
 

Not torrenting, but searching.

I want a way to find similar media to the media I like.

Something similar to Jellyseerr, with a way to browse media.

9
 
 

I'm doing a lot of coding, and what I would ideally like to have is a long-context model (128k tokens) that I can throw my whole codebase into.

I've been experimenting with Claude, for example, and what usually works well is to attach the whole architecture of a CRUD app along with the most recent docs of the framework I'm using; that's okay for menial tasks. But I am very uncomfortable sending any kind of data to these providers.

Unfortunately I don't have a lot of space, so I can't build a proper desktop. My options are either renting a VPS or going for something small like a Mac Studio. I know speeds aren't great, but I was wondering if using e.g. RAG for documentation could help me get decent speeds.

I've read that Macs become very slow, especially at larger contexts. I'm not fully convinced, but I could probably get a new one at 50% off as a business expense, so the Apple tax isn't as much of an issue as the concern about speed.

Any ideas? Are there other mini PCs available that might have a better architecture for this? I tried researching but couldn't find a lot.

Edit: I found some stats on GitHub on different models: https://github.com/ggerganov/llama.cpp/issues/10444

Based on that I also conclude that you’re gonna wait forever if you work with a large codebase.
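
For context, the kind of local setup those numbers apply to is llama.cpp's bundled server with the context window turned all the way up; a sketch, where the model file is a placeholder:

    # llama.cpp server with a 128k context window
    # (-m model path is a placeholder; -c sets context length, -ngl offloads layers to the GPU)
    llama-server -m ./some-coder-model-q4_k_m.gguf -c 131072 -ngl 99 --port 8080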

10
 
 

I am currently planning to set up Nextcloud as described in https://help.nextcloud.com/t/nextcloud-docker-compose-setup-with-caddy-2024/204846 and make it available via Tailscale.

I found a Tailscale reverse proxy example for the AIO version: https://github.com/nextcloud/all-in-one/discussions/5439 which also uses Caddy as the reverse proxy.

It might be possible to adjust it to the nextcloud:fpm stack.

But it might also be possible to use the built-in reverse proxy of the Tailscale sidecar via a TS_SERVE_CONFIG file. In this JSON file, multiple paths (/push/* and the / root) can be configured and redirected to the right internal DNS name and port (notify_push:7867 and web:80): https://tailscale.com/blog/docker-tailscale-guide

Has anyone done that? Can someone share a complete example?
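
For reference, the kind of TS_SERVE_CONFIG I mean would look roughly like this; an untested sketch adapted from the linked guide, assuming the fpm stack's container names (web, notify_push):

    # hypothetical serve config for the tailscale sidecar, mounted and pointed to
    # via TS_SERVE_CONFIG
    cat > serve.json <<'EOF'
    {
      "TCP": { "443": { "HTTPS": true } },
      "Web": {
        "${TS_CERT_DOMAIN}:443": {
          "Handlers": {
            "/push/": { "Proxy": "http://notify_push:7867" },
            "/":      { "Proxy": "http://web:80" }
          }
        }
      }
    }
    EOF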

11
 
 

Let's say I've got Nextcloud selfhosted in my basement and that it is accessible on the world wide web at nextcloud.kickassdomain.org. When someone puts in that URL, we'll have all the fun DNS-lookups trying to find the IP address to get them to my router, and my router forwards ports 80 and 443 to a machine running a reverse-proxy, and the reverse-proxy then sends it to a machine-and-port that Nextcloud is listening to.

When I do this on my phone next to the computer hosting Nextcloud, (I believe) what happens is that the data leaves and re-enters my home network, as my router sends the data to the IP address it is looking for (which is itself). This would mean that instead of getting a couple hundred Mbps from the local WiFi (or being wired in over Ethernet and getting even more), I'm limited by my ISP's upload speed of ~25 Mbps.

Maybe that just isn't the case and I've got nothing to worry about...

What I want my network to do is to know that nothing has to leave the network at all and just use local speeds. What I tried before was using a DNS rewrite in AdGuard such that anything going to my kickassdomain would instead go to the local IP address (so nextcloud.kickassdomain.org -> 192.168.0.99). This seemed to cause a lot of problems when I then left the house because, I assume, the DNS info was cached and my phone, now out in the world, would try to connect to that local IP and fail.

My final goal here is that I want to upload/download from my selfhosted applications (like nextcloud) without being limited by the relatively slow upload speed of the ISP.

Maybe the computer already figured all this out, though - it does seem like my router should know its own IP and not bother sending things out into the world just for them to come back.

If it matters, my IP address is pretty stable, but more importantly it is unique to me (like every house in the neighborhood has their own IP).

Updates from testing: So everything does indeed just work without me needing to change how I already had it set up, presumably because the router did the hairpin NAT action folks are talking about here.

I tested it by installing iperf3 on the server, then using my phone (with the PingTools Network Utilities Android app, only found on Google Play and not on F-Droid) to connect. Here are the results:

  1. Phone to local IP address (192.168.0.xxx) - ~700 Mbits/second
  2. Phone to speedtest.mykickassdomain.org while still on the wifi - ~700 Mbits/second
  3. Phone on cellular to speedtest.mykickassdomain.org - ~4 Mbits/second
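
For anyone wanting to repeat the test, it's just iperf3's built-in client/server mode; the client can be any device that can reach the server:

    # on the server
    iperf3 -s

    # from a client, against the LAN IP and against the public hostname
    iperf3 -c 192.168.0.xxx
    iperf3 -c speedtest.mykickassdomain.org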
12
 
 

I can't seem to find hardware requirements in the spec. Can someone help me out?

Looking to run this in a Docker container with a Postgres DB, not SQLite.

https://github.com/laurent22/joplin/blob/dev/packages/server/README.md

13
 
 

I've been wanting to get proper storage for my lil server running Nextcloud and a couple of other things, but NC is the main concern. It's currently running on an old SSD I've had lying around, so I'd want a more reliable, longer-term solution.

So I'm thinking of a RAID1 (mirror) HDD setup with two 5400 RPM 8 TB drives, which brings the choices down to IronWolf or WD Red Plus; both are in the same price range.

I'm currently biased towards the IronWolfs because they are slightly cheaper and have a cool print on them, but from Reddit threads I've seen that WD drives are generally quieter, which is currently a concern since the server is in my bedroom.

Does anyone have experience with these two drives and/or know of better options?

Oh, and for the OS: this being a simple Linux server, is it generally fine to have that on a separate drive, an SSD in this case?

Thanks! :3

14
 
 

As it stands, both Piped and Invidious are dead. Because of that, I almost completely stopped watching YouTube, but I'd still sometimes like to check what the people I follow have posted (I used to do that via Piped). Are there any new ways of following people without actually using Google? I'm aware of the tools that download new videos as they come out, but I'm more interested in just "subscribing", kinda like RSS? Ideally it would be on iOS.

Edit: I found it, “Unwatched” on iOS is awesome, thanks to [email protected]
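
For anyone else landing here: YouTube still publishes a plain Atom/RSS feed per channel, so any feed reader can handle the "subscribing" part without a Google account. The URL pattern (the channel ID comes from the channel page source) is:

    # per-channel feed; <CHANNEL_ID> is a placeholder
    curl -s 'https://www.youtube.com/feeds/videos.xml?channel_id=<CHANNEL_ID>'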

15
 
 

cross-posted from: https://lemmy.ml/post/24722787

I am running Ubuntu with CasaOS. I was previously running an Intel APU (the name has slipped me; I will update the post with this info when I can). Recently I got a GTX 1650 that I installed for NVENC transcoding. It seems all the proper drivers are installed, but my Jellyfin container still fails playback any time hardware transcoding is turned on.

I have reinstalled the container with the NVIDIA device variable and no dice. I have also tried installing the NVIDIA Container Toolkit, but that didn't work either. I am at a loss trying to get NVENC to work.

Any help is appreciated!

EDIT: here is the ffmpeg log file

https://gofile.io/d/9nsBFq
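
A quick check that narrows this kind of problem down is whether the GPU is visible inside a container at all; a sketch, where the CUDA image tag is just an example:

    # on the host: confirm the driver sees the card
    nvidia-smi

    # in a throwaway container: confirm the runtime passes the GPU through
    # (requires the NVIDIA Container Toolkit; image tag is an example)
    docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi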

16
 
 

Original Post:

I recently had a Proxmox node I was using as a NAS fail catastrophically. Not surprising, as it was a repurposed 12-year-old desktop. I was able to salvage my data drive, but the boot drive was toast. Looks like the SATA controller went out and fried the SSD I was using as the boot drive. This system was running TurnKey FileServer as an LXC, with the media storage on a subvol on a ZFS storage pool.

My new system is based on OpenMediaVault and I'm happy with it, but I'm hitting my head against a brick wall trying to get it to mount the ZFS drive from the old system. I tried installing ZFS using the instructions here, as OMV is based on Debian, but haven't had any luck so far.

Solved:

  1. Download and install OMV Extras
  2. In OMV's web admin panel, go to System -> Plugins and install the Kernel Plugin
  3. Go to System -> Kernel and click the blue icon that says Proxmox (looks like a box with a down arrow as of Jan 2025) and install the latest Proxmox kernel from the drop-down menu.
  4. Reboot
  5. Go back to the web panel, System -> Plugins and install the plugin openmediavault-zfs.
  6. Go to Storage -> zfs -> Pools and click on the blue icon Tools -> Import Pool. From here you can import all existing zfs pools or a single pool.
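
(The CLI equivalent of the last step, from a shell on the OMV box, is roughly the following; 'tank' is an example pool name:)

    # list pools visible on the attached disks, then import one by name
    # (-f forces the import if the pool wasn't exported cleanly)
    sudo zpool import
    sudo zpool import -f tank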
17
 
 

Hi all!

I have a nice setup with some containers (Podman rootless) and bare metal services (anything I can install bare metal usually goes bare metal).

In the past I used Monit to keep an eye on my services and automatically restart anything that goes down for whatever reason. I stopped using Monit because it doesn't scale well on a mobile browser and it's frankly clumsy to configure.

I could go back to Monit I guess, but I am wondering if there is anything better out there to try.

A few requirements (not necessarily mandatory, but preferable):

  • Open Source (ideally: true open source, not just commercial solutions with dumbed-down free versions)
  • Not limited to, or focused on, containers (no Watchtower and similar)
  • For containers, it can just support "works" or "restart"
  • For containers, if it goes beyond the minimum "works" and "restart", it must support Podman
  • Must support bare metal services (status, start, stop)
  • Must send email or other kinds of notifications (IM notifications are okay, but email preferred)
  • Should additionally monitor external machines (e.g. other servers on the LAN) or generic IP addresses
  • Should detect if a web service is alive but blocked
  • No need for fancy GUIs or a web GUI (it's a plus, but not required)
  • No need for data reporting, graphs and such amenities. They are a plus, but 100% not required.

What do you guys use?
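
For the bare metal side, plain systemd already covers auto-restart and basic failure hooks, which is the baseline any tool would be compared against; a minimal sketch, with unit names and the email address as placeholders:

    # make a bare metal service restart itself on failure
    sudo systemctl edit myservice.service
    # add in the override file:
    #   [Service]
    #   Restart=on-failure
    #   RestartSec=10
    #
    # for notifications, a common pattern is a template unit hooked in via
    #   [Unit]
    #   OnFailure=failure-email@%n.service
    # where failure-email@.service runs something like:
    #   systemctl status %i | mail -s "%i failed" you@example.com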

18
 
 

So I have been self-hosting my calendar and to-do list on a local server for some time now. I use Thunderbird's Tasks on my laptop and jtx Board on my phone.

I see that jtx Board has a journaling feature. It looks like maybe it is just for notes rather than a place to write self-reflections. Is there a self-hostable app like this, with both a mobile and a desktop component?

19
 
 

Greetings, so I finally got wife permission to buy a Pi Zero 2 and a Beelink S12 Pro (N100), arriving tomorrow. I already have a NAS drive for my media.

Question is, what's the typical setup, and what guides are there for this?

Of course I will be scouring this and other communities for info, but the immediate items I want to sort out are my Plex/Jellyfin server, setting up RetroArch or equivalent gaming, and then of course the *arr servers. After that I would also like to get into a reverse proxy, SearXNG, Nextcloud and Pi-hole.

Any tips on how to make this beautiful?

OS recommendations? I currently run Manjaro on my daily driver, but I'd think Kubuntu or a KDE Fedora/Debian spin might be better for this.

Guides you can point me to? Suggestions for more or better options? There are plenty of answers in this community and I will look at what’s posted but any assistance is appreciated.

Thank you in advance.

I'm excited to start playing with the simple things.

20
 
 

I am building a Proxmox server running on an SFF PC. Right now I have:

  • 1 x 250 GB Kingston A400 SATA SSD
  • 1 x 512 GB Samsung 970 Evo Plus NVMe
  • 1 x 512 GB Kingston KC3000 NVMe
  • 1 x 12 TB Seagate IronWolf re-certified disk

I plan to install Proxmox on the 250 GB Kingston disk using ext4 and use it only for Proxmox and nothing else.

I am thinking of configuring ZFS mirrored RAID on the two NVMe disks. Here one disk is on my mobo, and the other is connected to a PCIe slot with an adapter, as I have only one M.2 slot on the mobo. I plan to use this zpool for VMs and containers.

Finally, the re-certified 12 TB disk is currently going through a long smartctl test to confirm that it is usable. It is meant primarily for storing media, non-critical data and VM snapshots, which I don't care much about. In parallel, I will most likely also copy the critical data to a cloud location as an additional way to protect my most important data.

My question is: should I really be concerned about the lack of DRAM in the Kingston A400 SSD and its relatively low TBW endurance (85 TB) if I only use it to boot Proxmox from? I'd think the wear on the drive would be negligible.

  • I have the option to exchange the Proxmox boot drive for a proper SSD, like a Samsung 870 Evo (SATA SSD, using MLC NAND and having a DRAM cache). I would of course need to pay around 60% more, but I'm thinking this might be overkill.
  • Do you think that using a ZFS pool on the two NVMe drives will wear them out very quickly? I will have 3-4 VMs and a bunch of containers.
  • Is the use of a slow Proxmox boot drive (SATA SSD) going to slow down the VMs and containers, given that they will run on much quicker NVMe SSDs, or won't it matter?
  • Shall I format the Seagate HDD as XFS to speed up the transfer of large files, or shall I stick to ext4?
  • What other tests shall I run to confirm that the HDD is indeed fine and I can use it?
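
(For reference, the operations behind these questions come down to a few commands; device and pool names below are placeholders:)

    # mirrored zpool across the two NVMe drives
    zpool create -o ashift=12 nvmepool mirror /dev/nvme0n1 /dev/nvme1n1

    # long SMART self-test on the re-certified disk, then review the results
    smartctl -t long /dev/sdX
    smartctl -a /dev/sdX

    # a destructive write pass is another common burn-in (wipes the disk!)
    badblocks -wsv /dev/sdX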
21
 
 

Hi all,

I'm wondering if anyone has any suggestions for an (ideally) FOSS app that can help me transfer a large number of files between mobile devices. The exact scenario I'm trying to solve for is transferring a large number of pictures and videos from a family member's iPhone to my Android phone.

I've tried a few solutions (see the list below), but they all had some shortcoming or issue. I would ideally love something with a mobile app that can be installed, but only because in my experience mobile web browsers tend to time out / hang when dealing with a large number of file uploads at once.

  • Filerun - Filerun worked the best in my testing and, if there are no other suggestions, I'll probably return to this one, despite it not being FOSS.
  • Pingvin - Worked the next best, but would time out more frequently than Filerun. As long as I would batch the upload to only a few hundred pictures at a time and kept the screen alive, it would handle the upload.
  • PairDrop - Loved the simplicity of this web app and not having to send or deal with share links, however I was unable to get it to send uploads of more than ~100 files at a time.
  • Immich - Honestly, a perfect solution but since I'm only trying to send select pictures between devices, way overkill. Plus, family members were uncomfortable with a solution that gave the perception that it was automatically uploading ALL of their pictures to my server.

Thanks in advance for the suggestions!

22
 
 

More comprehensive show notes are on the Flarum forum. Enjoy this federated, self-hosted FOSS podcast about DIY and learning. Looking forward to expanding it to include more DIY, hardware and other sorts of projects, like cooking and music. Added mixing through Stereo Tool, run off my old Pi.

23
 
 

I have a ZFS RAIDZ2 array made of 6x 2 TB disks with power-on hours between 40,000 and 70,000. This is used just for data storage of photos and videos, not OS drives. Part of me is a bit concerned at those hours, considering they're a right old mix of desktop drives and old WD Reds. I keep them on 24/7, so they're not too stressed in terms of power cycles, but they have in the past been through a few RAID5 rebuilds.

Considering swapping to 2x 'refurbed' 12 TB enterprise drives and running ZFS RAIDZ1. So even though they'd have a decent amount of hours on them, they'd be better quality drives, and fewer disks means less chance of any one failing (I have good backups).

The next time one of my current drives dies, I don't feel like staying with my current setup will be worth it, so I may as well change over now before it happens?

Also, the 6x disks I have at the moment are really crammed into my case in a hideous way, so from an aesthetic POV (not that I can actually see into the solid case in a rack in the garage), it'll be nicer.
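
(For anyone weighing in, the hours above come straight from SMART; the quick per-disk check is something like the following, with the device name as a placeholder:)

    # power-on hours and reallocated sector count for each member disk
    smartctl -A /dev/sdX | grep -E 'Power_On_Hours|Reallocated_Sector'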

24
 
 

Someone on Lemmy posted a phrase recently: "If you're not prepared to manage backups then you're not prepared to self host."

This seems like not only sound advice but a crucial attitude. My backup plans have been fairly sporadic as I've been entering into the world of self hosting. I'm now at a point where I have enough useful software and content that losing my hard drive would be a serious bummer. All of my most valuable content is backed up in one way or another, but it's time for me to get serious.

I'm currently running an Ubuntu server with a number of Docker containers, and lots of audio, video, and documents. I'd like to be able to back up everything to a reliable cloud service. I currently have a subscription to Proton Drive, which is nice padding to have, but which I knew from the start would not really be adequate, especially since there is no native Linux Proton Drive client.

I've read good things about iDrive, S3, and Backblaze. Which one do you use? Would you recommend it? What makes your shortlist? What is the best value?
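
For what it's worth, several of those providers expose an S3-compatible endpoint, so the client tooling question is somewhat separate from the provider choice. A hedged sketch with restic against an S3-compatible bucket, where the endpoint, bucket and paths are placeholders:

    # credentials and repository location (placeholders)
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    export RESTIC_PASSWORD='choose-a-strong-passphrase'
    export RESTIC_REPOSITORY='s3:https://s3.example-provider.com/my-backups'

    # one-time initialisation, then a nightly run with pruning
    restic init
    restic backup /srv/docker /srv/media
    restic forget --keep-daily 7 --keep-weekly 4 --prune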

25
 
 

Hey, just sharing Faridoon, which was recently released and just got reshared on the selfh.st podcast. You can publish your favourite quotes, and upvote them too. Great for communities looking to save some of their history.
