pe1uca

joined 1 year ago
[–] [email protected] 2 points 1 week ago (1 children)

Start by learning Docker. You don't have to selfhost anything yet, just learn to run a container, especially to run automated stuff. Then learn to build images and run docker compose.

Also, you could start checking out some form of infrastructure as code; I usually hear about Ansible and NixOS.
This gives you a way to redeploy your services on any hardware easily.
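
If it helps, a first session could look something like this (the images here are just examples):

# run a one-off interactive container
docker run --rm -it alpine:latest sh

# run something in the background that survives reboots
docker run -d --name hello --restart unless-stopped nginx:latest

# later, describe the same thing in a compose file and manage it with
docker compose up -d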

[–] [email protected] 6 points 1 week ago (1 children)

Does it apply to all feeds? Or can it detect which feeds are actually YouTube ones?

[–] [email protected] 7 points 3 weeks ago (1 children)

Weird, it didn't ask when using Firefox and uBlock Origin.
I don't have all the lists active, though.

[–] [email protected] 1 points 1 month ago (1 children)

IIRC they mentioned it's next to impossible without actually processing the video and guessing when the ad stops on your client (the ads change per user, so it can't be done on a server for all users).

[–] [email protected] 13 points 1 month ago* (last edited 1 month ago) (3 children)

Yes, most podcasts are hosted outside of your podcast player and distributed via RSS (even on Spotify, which already hosts music).
So when a service "has" a podcast it means it lists what the RSS feed returns, but usually they just copy the text data, including the URL where the actual audio is stored.
That audio is served by whatever service the podcast's creator uses, which means you're a free user of that service even if you pay for Spotify, which means the wonderful benefit of ads.

And these are ads you can't block, since they're included in the audio stream (yay! /s).
Podverse (the player I use) mentions this as an issue when creating clips of podcasts, because it can't know how much the timestamps have been offset by those ads, so your clip probably only sounds right to you.
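
To make it concrete: a feed entry is mostly text metadata plus a pointer to wherever the audio actually lives, roughly like this (URL made up for illustration):

<item>
  <title>Episode 42</title>
  <enclosure url="https://host-with-ads.example.com/ep42.mp3" type="audio/mpeg" length="12345678"/>
</item>

Your player just downloads whatever that enclosure URL serves, ads and all.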

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago)

I use rclone and duplicati depending on the needs of the backup.

For long term I use duplicati: it has a GUI and you can upload to several places (mine are spread between e2 and Drive).
You configure the backend, password for encryption, schedule, and version retention.

With rclone and its crypt backend, you mount your backups as an external drive, so you need to manually handle the actual copying of data into it, plus versioning and retention.
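
As a rough sketch of that workflow (assuming a crypt remote named backups-crypt already set up via rclone config):

# mount the encrypted remote like an external drive
rclone mount backups-crypt: /mnt/backups --daemon

# copying data in (and any versioning/retention) is on you
rclone copy ~/documents backups-crypt:documents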

[–] [email protected] 2 points 1 month ago (1 children)

I can't give you the technical explanation, but it works.
My Caddyfile only has something like this:

@forgejo host forgejo.pe1uca
handle @forgejo {
	reverse_proxy :8000
}

and everything else has worked properly, cloning via SSH with [email protected]:pe1uca/my_repo.git

My guess is git only needs the host to resolve the IP and then SSH connects to its port directly, never touching the reverse proxy (which only handles HTTP).

[–] [email protected] 4 points 2 months ago (2 children)

I'm not saying to delete anything, I'm saying the file system could save space with something similar to deduping.
If I understand correctly, deduping works by sharing the same data blocks between files, so there's no actual data loss.

 

So, I'm selfhosting immich. The issue is we tend to take a lot of pictures of the same scene/thing to later pick the best, and we can end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate finding programs put those images at 95% or more similarity.

I'm wondering if there's any way, probably at file system level, for the same images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

[–] [email protected] 19 points 2 months ago (1 children)

Well, seems they already had the vaping sensors implemented and they're just announcing the notification implementation... How hard is it to just build an Android app that displays a list and a popup?

[–] [email protected] 1 points 2 months ago (1 children)

Nice, that's mostly what I need!
The only thing missing now is the parameters needed to launch it with the correct workout.

[–] [email protected] 3 points 2 months ago

I don't know why it needs internet access, nor why the first thing it tries to do is connect to an IPTV site, and when it can't, it crashes.
Can't trust that, hehe

 

I'm trying to configure some NFC tags to automatically open an app, which is easy, you just have to type the package name.
But I'm wondering how I can launch the app into a specific activity.

Specifically, when I search for FitoTrack on my phone I get the option to launch the app directly into the workout I want to track, so I don't have to open the app, tap the FAB, tap "Record workout", and then select the workout.
So I want to have a tag which will automatically launch this app into a specific workout.

How can I know what data I need to put into the tag to do this?

Probably looking at the code will give me the answer, but that won't apply to closed source apps, so is there a way to list all the ways my installed apps can be launched?
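
For what it's worth, one thing that might help if you have adb is dumpsys, which lists each app's activities and the intent filters they accept (de.tadris.fitness is FitoTrack's F-Droid id, double-check it for your install):

# dump the activities and intent filters an app exposes
adb shell dumpsys package de.tadris.fitness

# once you spot a candidate activity, test-launch it directly
adb shell am start -n de.tadris.fitness/<activity_name>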

[–] [email protected] 9 points 3 months ago (2 children)

Text to speech is what piper is doing.
What I'm looking for is called a voice changer, since I want to change a voice which has already read something.

That's exactly what I want: "the thing in the Darth Vader Halloween masks" but for Linux, preferably via CLI so it can ingest audio files, and configurable to change the voice however I want, not only to Darth Vader.
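
In the meantime, a crude approximation is possible with sox's pitch effect (just a pitch shift, not a real voice changer; the values are arbitrary):

# shift the voice down 5 semitones (500 cents) for a deeper read
sox input.wav output.wav pitch -500

# or up for a higher one
sox input.wav output.wav pitch 400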

 

I'm using https://github.com/rhasspy/piper mostly to create some audiobooks and read some posts/news, but the voices available are not always comfortable to listen to.
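
For reference, piper's CLI reads text on stdin and writes a wav, roughly like its README shows (the model name is one of its published voices):

echo 'Some post or article text' | \
  piper --model en_US-lessac-medium.onnx --output_file article.wav

Whatever changes the voice would need to slot in after that step.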

Do you guys have any recommendation for a voice changer to process these audio files?
Preferably it'll have a CLI so I can include it in my pipeline to process RSS feeds, but I don't mind having to work through a UI.
Bonus points if it can process the audio streams.

 

I started tinkering with frigate and saw the option to use a coral ai device to process the video feeds for object recognition.

So, I started checking a bit more what else could be done with the device, and everything listed on the site is related to human recognition (poses, faces, parts) or voice recognition.

Somewhere I read stable diffusion or LLMs are not an option, since they require a lot of RAM, which these kinds of devices lack.

What other good/interesting uses can these devices have? What are some of your deployed services using these devices for?

 

I have a few servers running some services using a custom domain I bought some time ago.
Each server has its own instance of caddy to handle a reverse proxy.
Only one of those servers can actually do the DNS challenge to generate the certificates, so I was manually copying the certificates to every other caddy instance that needed them and using the tls directive for that domain to read the files.

Just found there are two ways to automate this: shared storage, and on demand certificates.
So here's what I did to make it work with each one, hope someone finds it useful.

Shared storage

This one is in theory straightforward: you just mount a folder which all caddy instances will use.
I went the sshfs route, so I created a user and added ACLs to allow both the local caddy user and the new remote user to write to the storage.

# default ACLs so files the local caddy user creates stay writable by it (others get nothing)
setfacl -Rdm u:caddy:rwx,d:u:caddy:rwX,o:--- ./
# same defaults for the remote sshfs user
setfacl -Rdm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./
# and grant the remote user access to the files that already exist
setfacl -Rm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./

Then, on the server which will use the data, I mounted it with this fstab entry:

remote_user@<main_caddy_host>:/path/to/caddy/storage /path/to/local/storage fuse.sshfs noauto,x-systemd.automount,_netdev,reconnect,identityfile=/home/remote_user/.ssh/id_ed25519,allow_other,default_permissions,uid=caddy,gid=caddy 0 0

And configured the mount as the caddy storage:

{
	storage file_system /path/to/local/storage
}

On demand

This one requires a separate service, since caddy can't properly serve the combined file needed by the get_certificate directive.

We could run a service which reads the key and crt files and combines them directly on the main caddy instance, but I opted to serve the files from there and combine them on the server which needs them.

So, in my main caddy instance I have this:
access is restricted to my tailscale IP, and it includes the /ask endpoint required by the on demand configuration.

@certificate host cert.localhost
handle @certificate {
	@blocked not remote_ip <requester_ip>
	respond @blocked "Denied" 403

	@ask {
		path /ask*
		query domain=my.domain domain=jellyfin.my.domain
	}
	respond @ask "" 200

	@askDenied `path('/ask*')`
	respond @askDenied "" 404

	root * /path/to/certs
	@crt {
		path /cert.crt
	}
	handle @crt {
		rewrite * /wildcard_.my.domain.crt
		file_server
	}

	@key {
		path /cert.key
	}
	handle @key {
		rewrite * /wildcard_.my.domain.key
		file_server
	}
}

Then on the server which will use the certs, I run a service for caddy to make the HTTP requests.
This also includes another way to handle the /ask endpoint, since wildcard certificates are not requested with *: caddy asks for each subdomain individually, so the static query matcher above can't cover something like domain=*.my.domain.

package main

import (
	"io"
	"net/http"
	"strings"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	e.GET("/ask", func(c echo.Context) error {
		if domain := c.QueryParam("domain"); strings.HasSuffix(domain, "my.domain") {
			return c.String(http.StatusOK, domain)
		}
		return c.String(http.StatusNotFound, "")
	})

	e.GET("/cert.pem", func(c echo.Context) error {
		crtResponse, err := http.Get("https://cert.localhost/cert.crt")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		crtBody, err := io.ReadAll(crtResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		defer crtResponse.Body.Close()
		keyResponse, err := http.Get("https://cert.localhost/cert.key")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		keyBody, err := io.ReadAll(keyResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}

		return c.String(http.StatusOK, string(crtBody)+string(keyBody))
	})

	e.Logger.Fatal(e.Start(":1323"))
}

And in the Caddyfile, request the certificate from this service:

{
	on_demand_tls {
		ask http://localhost:1323/ask
	}
}

*.my.domain {
	tls {
		get_certificate http http://localhost:1323/cert.pem
	}
}
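
To sanity check the service before pointing caddy at it, something like this should work (the domain is just an example):

# expect 200 for a covered subdomain, 404 for anything else
curl 'http://localhost:1323/ask?domain=jellyfin.my.domain'
# expect the concatenated cert + key PEM
curl 'http://localhost:1323/cert.pem'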
 

What's your recommendation for a selfhosted service to stream some private videos from an S3-compatible service (Vultr)?

I was thinking a private peertube instance could work, but it requires the S3 files to be public and allow all origins, so I don't like that idea.

The other one was to use rclone mount to have it as another block storage, but I don't know what the cons of this are, or if it's possible to use it with this kind of service.
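
For reference, the rclone route would look roughly like this (remote/bucket names made up; the endpoint depends on your region, and you'd also pass your access keys):

# one-time: configure an S3 remote pointing at vultr object storage
rclone config create vultr s3 provider=Other endpoint=ewr1.vultrobjects.com

# mount the bucket read-only with caching for smoother playback
rclone mount vultr:videos /mnt/videos --read-only --vfs-cache-mode full --daemon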

This won't be for my camera videos (already have immich) nor for series/movies (jellyfin). It'll be for random videos from YouTube or Twitch which I want to hoard.

(Also, if you have a recommendation for cheap online storage for this it'll be appreciated; Vultr's is $0.006/GB)

 

I want to have something similar to a Google Nest Hub to display different types of information, like weather, bus times, my own services' information, photo gallery, etc.

It's not a problem if I have to manually write plugins for custom integrations.
It'd be better if it's meant to be shown in a web browser.

I remember there were some related to a screen for a digital mirror, or a kiosk screen, but I can't find a good one that I can selfhost and extend to my needs.

The ones I've found are focused on showing stats of deployed services and quick links to them.

 

I just attached a new volume to my VPS. Usually I follow the provided instructions using parted and mkfs.ext4, but this time I decided to try ZFS.

The guides I've found online are all very different, and I'm not sure if I did everything correctly to know the data will be safe.
What I mean is, running lsblk -o name,size,fstype,type,mountpoint shows this:

NAME     SIZE FSTYPE   TYPE MOUNTPOINT
vdb      100G          disk
└─vdb1   100G ext4     part /mnt/storage
vdc      100G          disk
├─vdc1   100G          part
└─vdc9     8M          part

You can see the fstype and mountpoint of the previous volume are listed, but the ZFS ones aren't.

Still, I can properly access the ZFS pool I created, and I've already copied some test data.

root@vps:~/services# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
local-zfs   99.5G  6.88G  92.6G        -         -     0%     6%  1.00x    ONLINE  -
root@vps:~/services# zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
local-zfs   6.88G  89.5G     6.88G  /mnt/zfs

The commands I ran were these:

# label and partition the disk
parted -s /dev/vdc mklabel gpt
parted -s /dev/vdc unit mib mkpart primary 0% 100%
# ashift=12 for 4K sectors, atime off to skip access-time writes,
# lz4 compression, and the pool mounted at /mnt/zfs
# (note: given the whole disk, zpool relabels it itself, creating the
# vdc1/vdc9 layout seen above, so the parted steps end up redundant)
zpool create -o ashift=12 -O canmount=on -O atime=off -O recordsize=8k -O compression=lz4 -O mountpoint=/mnt/zfs local-zfs /dev/vdc

Does this look good?
Should I do something else? (like writing something to fstab)

The list of properties is very long; are there any you recommend I look into for a simple server where currently non-critical data is stored?
(I already have a separate backup solution, maybe I'll check to update it later)

14
Custom voice input service (lemmy.pe1uca.dev)
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Is there any keyboard which lets you configure the service used for voice input?
I'd like to set a URL to a selfhosted service to send my voice to be processed, which will then return the transcription.

If no keyboard exists for this, any app would do.
The idea is the app lets you stream the audio to the given service, then receives the response and shows it for you to edit, similar to how Google's keyboard does voice input.

Bonus points if it's open source :P

 

I've been creating separate accounts for some of my selfhosted services; some are to further subdivide the data, but I always have at least an admin account and the account I use day to day.

What's your account creation schema?
What do you think about creating multiple accounts for your selfhosted services?

 

All guides to deploying with docker mention typing your keys/credentials/secrets into the docker compose file, or using a .env or similar file. I'm wondering how secure this is and if there's a better option.

Also, this has the issue of having to get into the server to manage them, and remembering which file has each credential.

Is there a selfhostable secrets manager? I've only found proprietary/paid ones for large infrastructures and I just need it for a couple of my servers/projects.
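
For what it's worth, compose itself supports file-based secrets that at least keep credentials out of the compose file and the environment (names below are examples; the secret shows up in the container at /run/secrets/db_password):

services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt

But the plaintext file still lives on the server, so it only partially solves this.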
