Shimitar

joined 1 year ago
[–] [email protected] 7 points 2 weeks ago

Nope!

Just wasted 3 days debugging an IP assigned to two devices... Not fun, don't do it...

[–] [email protected] 4 points 3 weeks ago

I think that proposing Immich for every use case out there is not the correct answer.

As much as I like Immich, this is not a good use case... IMHO.

[–] [email protected] 0 points 3 weeks ago (1 children)

All that? Well, I understand your point, but honestly I have more fun learning something new, and it was really little work.

Anyway... It's an option too.

[–] [email protected] -2 points 3 weeks ago (3 children)

No, you don't need two: in fact I have only Unbound set up to do everything with one piece of software.

Better or worse? No idea, but it works and it's one less piece that might fail.
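For the curious, a minimal sketch of what "everything with one piece of software" means here (domain names and addresses are placeholders, not my real config; the actual setup is on my wiki):

```
# /etc/unbound/unbound.conf.d/local.conf -- illustrative sketch only
server:
    interface: 192.168.1.1
    access-control: 192.168.1.0/24 allow

    # Internal names answered directly by Unbound
    local-data: "nas.home.example.org. A 192.168.1.10"

    # Ad blocking: refuse to resolve unwanted domains
    local-zone: "ads.example.com." always_nxdomain

    # Everything else is resolved recursively from the root servers,
    # so no upstream forwarder (Google, AdGuard...) is needed
```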

[–] [email protected] -1 points 3 weeks ago

I have quite a rich self-hosted stack, and DNS is indeed part of it.

For such a critical piece of infrastructure I didn't need a container: I just installed Unbound and did some setup for ad blocking and internal DNS rules.

Here is my setup: https://wiki.gardiol.org/doku.php?id=router:dhcp-dns

You could maybe go with an independent Pi-hole, but that would double the chances of a hardware failure...

Using one device for everything might seem risky, but it actually has a lower chance of failure ;)

[–] [email protected] 2 points 4 weeks ago

That's not the point. Maybe you can, but for how long? You will never stop asking that question with Docker...

[–] [email protected] 3 points 4 weeks ago (1 children)

I think you wrote it backwards: transitioned from Docker to Podman?

Yeah, Podman should use quadlets, not compose, but it still works just fine with docker compose and the Podman socket!
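Roughly, the socket trick is just this (assuming a rootless user session; adapt paths as needed):

```
# Enable the Podman API socket for the current (rootless) user
systemctl --user enable --now podman.socket

# Point docker compose (or any Docker client) at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker compose up -d
```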

[–] [email protected] 1 points 4 weeks ago

Yes, you need both 80 and 443 for certbot to work. Anyway, having 80 redirect to 443 is common and not a security risk.
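If it helps, the port 80 vhost typically does nothing but answer the ACME challenge and redirect; something like this (nginx assumed, names and paths are placeholders):

```
server {
    listen 80;
    server_name example.org;

    # Let certbot's HTTP-01 challenge through on port 80
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # Everything else gets bounced to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}
```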

[–] [email protected] 52 points 4 weeks ago (6 children)

Podman, guys... Podman all the way...

[–] [email protected] 71 points 4 weeks ago* (last edited 4 weeks ago) (18 children)

There is no "write and forget" solution. There never has been.

Do you think we have ORIGINALS of Greek or Roman written texts? No, we only have those that have been copied over and over through the centuries. Historians know this too well. And 90% of everything ever written by humans in all of history has been lost, and all of that was written on more durable media than ours.

The future will hold only those memories of us that our descendants take the time to copy over and over. Nothing we do today to preserve our media will last 1000 years in any case.

(Will we as a species even survive 1000 more years?)

Still, it's our duty to preserve as much as we can for the future. If today's historians are any guide, the most important bits will be the ones least valued today: the ones nobody will care to actually preserve.

To cite Alessandro Barbero, a top-notch contemporary Italian historian: he would kill to know what a common peasant had for breakfast in the tenth century. We know nothing about that, while we know a tiny little more about kings.

[–] [email protected] 6 points 1 month ago

Fellow Gentoo user! Kudos.

[–] [email protected] 1 points 1 month ago

Well, here is the relevant part then, sorry if it was not clear:

  • Jellyfin will not play well with reverse proxy auth. While the web interface can be put behind it, the API endpoints will need to be excluded from the authentication (IIRC there are some examples on the web; rough sketch below), but the web part will still force you to double login and cannot pick up the proxy auth passed down to it.
  • Jellyfin does support OIDC providers such as Authelia, and it's perfectly possible to link the two. In this case, as I was pointing out, Jellyfin will still use its own authentication login window and user management, so the proxy does not need to be modified.

TL;DR: proxy auth doesn't work with Jellyfin; OIDC does, and it bypasses the proxy, so in either case the proxy will not be involved.
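To give an idea of the first bullet, the "exclude the API" approach looks roughly like this in nginx; the exact list of paths has to come from those examples on the web, the ones here are only placeholders:

```
# Web UI behind proxy auth (basic auth here just as a stand-in)
location / {
    auth_basic "restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
    proxy_pass http://127.0.0.1:8096;
}

# Example API endpoint excluded from proxy auth (placeholder path --
# the real list depends on the Jellyfin version and client apps)
location /Users/AuthenticateByName {
    auth_basic off;
    proxy_pass http://127.0.0.1:8096;
}
```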

 

Hi!

I have set up ScanServJS, which is an awesome web page that accesses your scanner and lets you scan and download the scanned pages from your self-hosted web server. I have the scanner configured via SANE locally on the server, and now I can scan via the web from any device (phone, laptop, tablet, whatever) with the same consistent web interface for everyone. No need to configure drivers anywhere else.

I want to do the same with printing. On my server, the printer is already configured using CUPS, and I can print from Linux laptops via the shared CUPS printer. But that requires setup anyway, and while I could make it work for phones and tablets, I want to avoid that.

I would like to set up a nice web page, like the one for the scanner, where users, no matter which device they use, can upload files and print them, without installing or configuring anything on their devices.

Is there anything that I can self-host to this end?

42
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Hi fellow hosters!

I self-host lots of stuff, from the classical *Arrs all the way to SilverBullet and photo services.

I even have two ISPs at home to manage failover in case one goes down; in fact I rely on my home services a lot, especially when I am not at home.

The main server is a powerful but older laptop whose battery I recently replaced because of its age, but my storage is composed of two RAID arrays, which are of course external JBODs with external power supplies.

A few years ago I purchased a cheap UPS, basically this one: EPYC® TETRYS - UPS https://amzn.eu/d/iTYYNsc

It works just fine and can sustain the two RAID arrays long enough to ride out any small power outage.

The downside is that the battery itself degrades quickly, and every one or two years tops it needs to be replaced, which is not only a cost but also an inconvenience, because I always seem to find out at the worst possible time (during a power outage), of course!

How do you tackle the issue in your setups?

I need to mention that I live in the countryside. Power outages happen like once or twice per year, so it's not a big deal, just annoying.

 

I have a home network with an internal DNS resolver. I have some (public) subdomains that map to a real-world IP address, and map to the home server's private address when inside the home network.

In short, I use Unbound and have added some local-data entries so that, when at home, those subdomains point to 192.168.x.y instead.

All works perfectly fine from Windows and from Linux PCs.

Android, instead, doesn't work.

With dynamic DHCP allocation on Android, the names cannot be resolved (ping fails...) from the Android devices. With a specific global DNS server set (like dns.adguard.com), they will of course always resolve to the public IP.

The only solution I found is to disable DHCP for the Wi-Fi on Android and set a static IP with 192.168.x.y as the DNS server; in that case it works.

But why? Anybody have any hints?

It's like Android has some kind of DNS rebinding protection enabled by default, but I cannot find any information on it at all.
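In case it helps anyone hitting the same thing, this is roughly how to compare what the phone should get against what it actually gets (the IP and name below are placeholders; run it from any device on the same Wi-Fi, e.g. via Termux):

```
# Ask the home Unbound directly (192.168.1.1 standing in for its LAN IP):
# this should return the private address
dig @192.168.1.1 myservice.example.org +short

# Ask whatever resolver the device is actually using: if this returns
# the public IP, the DHCP-advertised DNS server is being bypassed
# (e.g. by Android's Private DNS / DNS-over-TLS)
dig myservice.example.org +short
```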

 

As the title goes, is there a way to download content from Amazon Prime Video?

Like yt-dl or similar...

21
DNS issues (feddit.it)
submitted 3 months ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Hi! I am self-hosting my services and using a dnsmasq setup to provide ad blocking for my home network.

I was tinkering with Unbound to add a fully independent DNS resolver and not depend on a Google/AdGuard/whatever upstream DNS server, but I am unable to make Unbound work.

Top-level domains (like com, org...) resolve fine, but anything at the second level doesn't. I am using "dig" (of course I am on Linux) and Unbound's logging to find out what's going on, but I am at a loss.

Could my ISP be blocking my requests? If I switch back to Google DNS (for example) everything works fine, but my Unbound will only resolve TLDs and some random names. For example, it will resolve google.com but not kde.org...
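For reference, these are the kinds of dig checks that help narrow this down (kde.org just as the example name):

```
# Iterate from the root servers ourselves, bypassing the local resolver:
# if this stalls after the TLD step, outbound DNS to the authoritative
# servers is probably being blocked or intercepted upstream
dig +trace kde.org

# Ask the local Unbound directly and compare the answer
dig @127.0.0.1 kde.org +short
```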

Edit: somehow fixed by nuking the config file and starting over.

 

If I remember correctly, the FitTrackee dev posts on this community.

Well, I want to thank them, as this is a very nice piece of software. I just started using it, but it looks so promising and well done! A breeze to install, even on bare metal, and so well designed (even a CLI? Come on!).

Looking forward to trying the Garmin integration tomorrow.

Thanks buddy! Appreciated.

76
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/[email protected]
 

Looking for a self-hosted diary type of service, where I can log in and write small topics and ideas, tag them, and date them. No need for public access.

Any recommendations?

Edit: is anybody using MonicaHQ, or does anyone have experience with it?

Clarification: indeed, I could use a general note-taking app for this task. I already host and use SilverBullet for general notes and such. I am looking for something more focused on daily events and connections, like noting people met, sport activities and feedback, names, places... So tagging and dates would be central, as well as connections to calendar and contacts, and who knows what else... So I want to explore existing, more advanced, more specialized apps.

Edit 2: I ended up with BookStack. MonicaHQ seems very nice, but I was unable to install it using containers. It would not obey APP_URL properly and would constantly mess up the HTTP/HTTPS redirection. The community was unresponsive and apparently GitHub issues have been ignored lately. So I ditched MonicaHQ and switched to BookStack: it installed in a breeze (again as a container) and a very simple NGINX setup just worked. I will be testing it out now.
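For reference, the "very simple NGINX setup" is essentially the standard reverse-proxy block with forwarded headers; a sketch (hostname, port, and certificate paths are placeholders, not my actual config):

```
server {
    listen 443 ssl;
    server_name books.example.org;
    # ssl_certificate / ssl_certificate_key lines omitted

    location / {
        proxy_pass http://127.0.0.1:6875;
        # Forward the original host and scheme so the app builds
        # correct HTTP/HTTPS links behind the proxy
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```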

 

Hi! I have been using Radicale since I switched from Nextcloud, with DAVx5 on Android, pretty nicely.

I was thinking about adding a web UI to also access my calendars from the web... Any recommendations?

Radicale's web UI only manages accounts and stuff, not the calendar contents.

 

Hi! I have a mixed set of containers (a few, not too many) and bare-metal services (quite a few), and I would like to monitor them.

I am using good old "monit", which monitors my network interfaces, filesystem status, and traditional services (via PID files). It's not pretty, but it gets the work done. It seems I cannot find a way to have it also monitor my containers. Consider that I use Podman and have a strict one-service-one-user policy (all containers are rootless).
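For context, my current monit checks look more or less like the first stanza below; the second stanza is the kind of wrapper I imagine would be needed for a rootless container (names, paths, and the wrapper script are hypothetical):

```
# Classic service check via PID file
check process nginx with pidfile /run/nginx.pid
    start program = "/usr/bin/systemctl start nginx"
    stop program  = "/usr/bin/systemctl stop nginx"

# Hypothetical container check: a small script that runs
# "podman ps" (or "podman healthcheck run") as the container's
# user and exits non-zero if the container is down
check program photos_container with path "/usr/local/bin/check-photos.sh"
    if status != 0 then alert
```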

I also run "netdata", but I find it overwhelming: too much data, too many graphs, just too much for my needs.

I need something that:

  • let me monitor service status
  • let me monitor containers status
  • let me restart services or containers (not mandatory, but preferred)
  • has a nice web GUI
  • the web gui is also mobile friendly (not mandatory, but appreciated)
  • Can print some history data (not mandatory, but interesting)
  • Can monitor CPU usage (mandatory)
  • Can monitor filesystem usage (mandatory)

I don't care for authentication features, since it will be behind a reverse proxy with HTTPS and proxy authentication already.

I am not looking for a fancy and complex dashboard, but for something I can host on a secondary page that I open if/when I want to check stuff. Also, being able to script the tool or access it via an API would be useful, so I could write some extractors to print a summary in my own dashboard.

91
submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
 

I have spent quite a lot of time trying to find the best photo management solution for my use case, and I think I finally have a solution in mind. Please follow along and help me understand what could be improved.

The use case: over the decades I took thousands of pictures with manual film SLRs, digital DSLRs, and many other devices. Today I mostly take pictures with my phone and occasionally (like 1-5 rolls per year) shoot B/W film. I like to have all the pictures neatly organized per album. Albums are events, trips, occasions, or just collections of photos that belong together for any good reason. I have always organized albums by folders and stored metadata either in the photos or in sidecar files. Over the decades I changed management tools many times (the longest-lasting was Digikam), but they all faded away for one reason or another. I do not want to change this organization, since it has proved solid over decades. I do not trust putting all my eggs in a database or a proprietary tool's format.

The needs: back up photos from the family's phones. Organize photos into albums (in the format stated above), share & show pictures with the family (maybe a broader public too), archive for long-term availability. Possibly small edits like rotation. Face recognition is a good plus; geographical mapping and reverse geotagging are a great plus. General object recognition could be useful, but is not a noticeable plus. I also need multi-user support for family members, both for backup and gallery-like browsing. My galleries all need to be shared (or better, one big gallery, plus individual backups per user).

What I don't need: complex editing / elaboration (that would be done offline with darktable).

Non-negotiable needs: storing photos in an album-based subfolder structure with all metadata inside the photos or in sidecar files. No other solution will ever stand the test of time.

I tried many tools and none fit the bill. Here are my experiences:

  • Immich: by far the most polished, great for phone backup & sync, but not good for album organization (photos cannot be sorted into folders, albums are logical only). Has the best face detection and reverse geocoding.
  • PhotoPrism: gave it up because I don't like open source with price tags (devs have every right to ask for money, but I distrust a model where they might drop support unless they make money).
  • LibrePhotos: feels abandoned, and the UI and face detection are subpar compared to Immich.
  • PiGallery2: blazing fast and a great UI, but it cannot be used for backups or organization. It does cope well with my long-standing collection of photos, though.
  • Piwigo: I used this decades ago. By today's standards it feels ugly, bloated, and slow as hell. No benefits for my use case that would compensate for the sluggishness, and my server is powerful.
  • Damselfly: great tool and a super friendly dev; unfortunately I could not fit it into my use case. It can work on folders, but its actions are too limited: besides downloads, exports, and tagging... not much else. Not even backups from a phone. I understand its use case is totally different from mine. Still a great piece of software.

My solution: more the outline of how I want to proceed from here on...

Backup: keep the great Immich for phone backups. Limitations: requiring email addresses as user logins breaks my home server's authentication scheme, but I can live with it. The impossibility of organizing photos into folders is a deal breaker, but luckily you can define "logical" albums and download them.

Organization: good old filesystem stuff, I don't need any specific tools. Existing photos are already sorted into subfolders; new albums can be created from Immich, downloaded, and stored in new subfolders on the server. Non-phone albums (DSLR, film cameras...) can just be added directly on the filesystem as well.

Viewing: PiGallery2 pointed at the subfolders; blazing fast viewing online for all family members.

Global workflow: take photos with the phones, upload automatically to Immich, then manually sort them into albums, download the albums, and create the appropriate subfolders on the server (if needed to save space, delete the downloaded photos from Immich). Upload/unzip and enjoy from PiGallery2. -- OR -- take photos with other cameras, scan/process on the PC (darktable), create the appropriate subfolders on the server, upload, and enjoy from PiGallery2.
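The manual middle step boils down to something like this on the server (paths and the album name are made up for illustration):

```
# An album exported from Immich arrives as a zip; it gets its own
# subfolder in the gallery tree that PiGallery2 indexes
ALBUM="2024-summer-trip"
mkdir -p /srv/photos/2024/"$ALBUM"
unzip ~/Downloads/"$ALBUM".zip -d /srv/photos/2024/"$ALBUM"
# PiGallery2 picks the new folder up on its next indexing run
```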

All in all, what pisses me off about all this is:

  • Immich requiring a fucking email address to log in (not a privacy concern here, but my users will need to remember a different login for this specific part).
  • Immich not supporting subpaths: I will need two subdomains to achieve this workflow, while just one would have been less complex for the users (something like photos.mydomain.org/gallery and photos.mydomain.org/backup instead of photobackup.mydomain.org and photogallery.mydomain.org, you get the idea). I know all the blah blah about subdomains being better and such; I don't care, this is a usability issue for non-technical users and, in general, it's the way I prefer it to be.

Of course, the best course would be for Immich to support folders (not external libraries, but actual folder-based albums, which is a totally different approach) and to be able to move photos into folders, but hey, it wouldn't be fun in that case :)

Any thoughts?

UPDATE: Immich storage templates seem to be the missing link. Using them properly would cut out the manual download/re-upload approach. I need to experiment a bit, but it looks promising.
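If the storage template route pans out, the idea would be a template along these lines, so Immich itself writes files into album-named subfolders (I have not verified the exact variable names yet, so treat this as a guess to check against the Immich docs):

```
{{album}}/{{y}}-{{MM}}-{{dd}}-{{filename}}
```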

25
submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
 

I am setting up my notes approach, which uses dedicated apps on my devices plus Syncthing.

I tried lots of tools like Joplin, Obsidian, etc., but they are overkill or have something I don't like.

So I am using Markor on Android, another dedicated app on Linux, and so on.

I would also like to add a web app to edit the MD files directly on my server for when I don't have any way to install Syncthing or an editor app.

The web GUI would need to list the MD files stored locally on the server and let me edit/view/save them. Upload and download are not required, as I already have that set up via filebrowser.

Any hints?

Edit: to be clear, I am not looking for an IDE or anything fancy; I only need to edit some notes online on my server. I do not want to spin up containers or deploy full VS Code solutions just for this; all I need is a web GUI editor for MD with the capability to load files from the server.

Second edit: I ended up self-hosting SilverBullet.md, which made my day. Exactly what I was looking for, and even more than that. Thanks all!

105
Selfhost wiki (personal) (wiki.gardiol.org)
submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
 

I have finally got my self-hosted wiki into a satisfying shape. It's here: https://wiki.gardiol.org

Take a look, I hope it can help somebody.

I am open to any suggestions about it.

Note: the most original part is the one about multi-homed routing, failback, and advanced routing.
