butitsnotme

joined 1 year ago
[–] [email protected] 2 points 2 months ago

What about simply shelling out to ripgrep?
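
As a rough sketch (the wrapper, pattern, and path arguments are all placeholders, since I don't know the parent project's setup), ripgrep's --json flag makes the output easy to consume from a wrapping program:

```bash
#!/usr/bin/env bash
# Delegate the heavy lifting to ripgrep and emit machine-readable results.
# $1 = search pattern, $2 = directory to search (defaults to the current one).
rg --json "$1" "${2:-.}"
```

Each match comes back as one JSON object per line, so the caller only has to parse stdout instead of reimplementing the search.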

[–] [email protected] 6 points 2 months ago (5 children)

These devices have been recommended in the past, and it looks like they can run OpenWRT

https://www.amazon.com/GL-iNet-GL-SFT1200-Secure-Travel-Router/dp/B09N72FMH5

https://openwrt.org/toh/gl.inet/start

[–] [email protected] 4 points 3 months ago* (last edited 3 months ago)

Bash and a dedicated user should work with very little effort. Basically: create a user on your VM (maybe called git), set up passwordless (and keyless) SSH for this user, but force the command to be git-shell. Next, write a simple bash script that iterates over the directories in this user’s home directory and runs git fetch --all in each. Set cron to run this script periodically (every hour?). To add a new repository, SSH in as your regular user and su to the git user, then clone the new repository into the home directory. To change the upstream, do the same but simply update the remote.
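
A minimal sketch of that update script (the user name git and the paths are assumptions from the description above):

```bash
#!/usr/bin/env bash
# Run as the dedicated "git" user, e.g. hourly from cron:
#   0 * * * * /home/git/update-mirrors.sh
set -euo pipefail

for repo in /home/git/*/; do
    # Only touch directories that are actually git repositories.
    if git -C "$repo" rev-parse --git-dir >/dev/null 2>&1; then
        git -C "$repo" fetch --all --prune
    fi
done
```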

This could probably be packaged as a Dockerfile pretty easily, if you don’t mind either needing to specify a non-standard SSH port or giving up the machine’s port 22.

EDIT: I found this after posting; it might be the easiest way to serve the repositories, in combination with the update script. There’s a bunch more info in the Git Book too; the next section covers setting up HTTP…

[–] [email protected] 5 points 3 months ago

I would probably use ntfy.sh for this purpose. It doesn’t quite meet all your requirements, but you could use a random channel name and get some amount of security…

You can self-host it, or use the hosted version. (I know it’s technically not chat, but it works as a series of messages; it just happens to call them notifications.)
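
For example, publishing and subscribing are both plain HTTP; the topic name here is a made-up placeholder, and a long random one is what stands in for access control:

```bash
# Publish a message (POST the body to the topic URL):
curl -d "backup finished OK" https://ntfy.sh/my-secret-topic-x7q3z9

# Subscribe from anywhere else; each message arrives as a line of JSON:
curl -s https://ntfy.sh/my-secret-topic-x7q3z9/json
```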

[–] [email protected] 2 points 3 months ago (1 children)

Yes, I have. I should probably test them again though, as it’s been a while, and Immich at least has had many potentially significant changes.

LVM snapshots are virtually instant, and there is no merge operation, so deleting a snapshot is also virtually instant. It works by creating a new space where the differences from the main volume are written: each time the application writes to the main volume, the old block is copied to the snapshot first (copy-on-write). This does mean disk performance will be somewhat lower while a snapshot exists, but I haven’t noticed any practical impact. (I believe LVM typically puts my snapshots on a different physical disk from the one the main volume lives on, though.)

You can find my backup script here.
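
Roughly, the core of it looks like this (the VG/LV names and snapshot size are placeholders, not my actual config):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Take a point-in-time snapshot of the service's volume. The --size is
# how much room the snapshot has to absorb writes while the backup runs.
lvcreate --snapshot --name immich-snap --size 5G /dev/vg0/immich

# Mount it read-only and back it up with whatever tool you prefer.
mkdir -p /mnt/immich-snap
mount -o ro /dev/vg0/immich-snap /mnt/immich-snap
tar -czf "/backups/immich-$(date +%F).tar.gz" -C /mnt/immich-snap .

# Dropping the snapshot is as cheap as creating it: no merge step.
umount /mnt/immich-snap
lvremove -y /dev/vg0/immich-snap
```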

[–] [email protected] 4 points 3 months ago (3 children)

I don’t bother stopping services during backups; each service is contained to a single LVM volume, so restoring a snapshot is exactly the same as yanking the plug. I haven’t had any issues yet, either with actual power failures or with data restores.

[–] [email protected] 37 points 3 months ago (5 children)

I know it’s not ideal, but if you can afford it, you could rent a VPS from a cloud provider for a week or two, do the download from Google Takeout on that, and then use rsync or similar to copy the files to your own server.
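
The copy itself can be as simple as this (host name and paths invented for illustration):

```bash
# On the VPS, once the Takeout archives have finished downloading:
rsync -avP takeout/ you@your-server.example:/mnt/storage/takeout/
```

The -P flag keeps partial transfers and shows progress, which matters for multi-gigabyte archives over a flaky link.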

[–] [email protected] 2 points 4 months ago

For no. 1, that shouldn’t be DinD (Docker-in-Docker); the container would be controlling the host’s Docker daemon, wouldn’t it?

If so, keep in mind that this is the same as giving root SSH access to the host machine.

As far as security goes, anything that allows GitHub to cause your server to download (pull) and run arbitrary Docker images with arbitrary configuration is remote code execution. It doesn’t really matter what you do to secure access to the machine if someone compromises your GitHub account.

I would probably set up SSH with a key dedicated to GitHub, specifically for deploying. If SSH is configured to allow only key-based access, it’s not much of a security risk to open it up to the internet. I would then restrict that key to a single command: a very simple bash script that runs git fetch, then git verify-commit origin/main (or whatever branch you deploy), before checking out the latest commit on that branch.
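
Sketching that out (the script path, repo location, and branch are stand-ins):

```bash
# ~/.ssh/authorized_keys entry on the server, pinning the GitHub deploy
# key to one command (all on one line in the real file):
#   command="/usr/local/bin/deploy.sh",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... deploy

#!/usr/bin/env bash
# /usr/local/bin/deploy.sh: the only thing the key above can run.
set -euo pipefail
cd /srv/app   # hypothetical checkout location

git fetch origin
# Refuse to deploy anything whose tip commit isn't signed by a trusted
# key; needs verification configured (e.g. gpg.ssh.allowedSignersFile).
git verify-commit origin/main
git reset --hard origin/main
```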

You can sign commits fairly easily using SSH keys now, which, combined with the above, lets you store your data on GitHub without effectively trusting them with RCE on your host.
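
The setup is just a few git config calls (key paths and the email are examples):

```bash
# On the machine where you commit: sign with your existing SSH key.
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true

# On the server: list the keys that git verify-commit should trust.
echo "you@example.com $(cat ~/.ssh/id_ed25519.pub)" >> ~/.ssh/allowed_signers
git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
```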

[–] [email protected] 3 points 4 months ago* (last edited 4 months ago)

My recommendation would be to use LVM. Set up a PV on the new drive and create an LV filling the drive (with a filesystem), then move all the data off of one old drive onto this new drive, reformat that old drive as a second PV in the volume group, and expand the size of the LV. Repeat the process for the second old drive, except that this time, instead of extending the LV, you set the parity on the LV to 1. You can add further disks in the future, increasing the LV size or adding parity or mirroring as needed. This also gives you the advantage that, once you have some free space, you can create another LV with different mirroring or parity requirements.
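
A hedged sketch of the commands involved (device names, VG name, and filesystem are placeholders, and the final parity conversion in particular varies with LVM version, so check lvmraid(7) first):

```bash
# Step 1: the new drive becomes the first PV, with an LV filling it.
pvcreate /dev/sdc
vgcreate vg0 /dev/sdc
lvcreate -l 100%FREE -n data vg0
mkfs.ext4 /dev/vg0/data

# Step 2: copy the first old drive's contents onto /dev/vg0/data, then
# recycle that drive into the VG and grow the LV and filesystem:
pvcreate /dev/sda
vgextend vg0 /dev/sda
lvextend -l +100%FREE /dev/vg0/data
resize2fs /dev/vg0/data

# Step 3: after the second old drive joins the VG, convert to a parity
# layout instead of extending again (may need an intermediate raid1
# step on older LVM versions):
lvconvert --type raid5 vg0/data
```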

[–] [email protected] 2 points 5 months ago

I use the first option, but with the addition of an LVM snapshot to guarantee that the database (or anything else in the backup) isn’t changed while the backup is being taken.

[–] [email protected] 1 points 5 months ago

Getting a domain name may not be enough; if you don’t have a static IP, you’ll still need a DDNS service.

What do you get with the paid no-ip service? Is it just a nicer subdomain? You can get a custom domain and use a CNAME record to point one or more of its subdomains at a free DDNS subdomain.
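
As an illustration (all names here are invented), the record and a quick check would look like:

```bash
# DNS record at your registrar, aliasing your subdomain to the DDNS name:
#   home.example.com.  300  IN  CNAME  myname.ddns.example.
# Verify it resolves:
dig +short CNAME home.example.com
```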
