No, but the chance of voltage drops due to the battery not being able to keep up with the load is a lot lower.
2xsaiko
Ironically, this is something that China ruined.
Plot summary:
Arkham Private Investigator Arthur Lester wakes up with no memory of who he is or what has happened, only a nameless, eerie voice guiding him through the darkness.
Blind, terrified, and confused, his journey will lead him towards a series of mysteries in the hopes of understanding the truth of what has transpired.
As cosmic horrors seep into the world around him, Arthur must ask himself whether this entity truly seeks to help him, or whether its intentions are more…
Malevolent
It uses the podcast medium so well: the main character is blind, so he needs the voice in his head to describe the world around him, just like the listener does, and the resulting dialogue between them (and the show in general) is incredibly well written. Harlan Guthrie is a genius.
Seconding what others have already said. You should ABSOLUTELY NOT directly back up /var/lib/postgresql if that's what you're doing right now. Instead, use pg_dump: https://www.postgresql.org/docs/current/backup-dump.html
This should also give you smaller and probably more compressible backup sizes.
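As a rough sketch of what that looks like in practice (database name and paths here are made up, adjust to your setup):

```shell
# dump a single database to a compressed, custom-format archive
pg_dump -U postgres -Fc mydb > /backups/mydb.dump

# restore it later with pg_restore
pg_restore -U postgres -d mydb /backups/mydb.dump

# or dump every database plus roles as plain SQL
pg_dumpall -U postgres > /backups/all.sql
```

The custom format (`-Fc`) is already compressed and lets pg_restore restore tables selectively, which is usually what you want for scheduled backups.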
I have no idea what people are talking about. My M2 MacBook with 8 GB handles pretty much all programming I do on it (the biggest thing I've worked on with it was probably a 500k-line C++ project), and I usually use CLion, which is one of the big IDEs. I'd go for more disk space before more RAM, honestly. (Sure, my main machine has 64 GB, but that's because I run huge compilation jobs testing distro packages, games, VMs, and a bunch of other stuff on it, sometimes in parallel, and the compilation jobs alone can easily take up 40 GB. I'd say that's not a usual use case.)
"closed by stalebot"
Since you mention nginx, I assume you're talking about proxying HTTP and not SMTP/IMAP... For that, there's the X-Forwarded-For header, which exists for exactly this purpose: retaining the real source IP through a reverse proxy.
You should be able to add `proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;` to your location block.
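For reference, a minimal sketch of what that could look like (the upstream address is made up, and mailu's recommended headers may differ):

```nginx
location / {
    proxy_pass http://10.0.0.2:8080;  # hypothetical upstream over the tunnel
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```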
Alternatively, looks like there's a Forwarded header (RFC 7239, from 2014) which I've never seen before but it seems cool: https://www.nginx.com/resources/wiki/start/topics/examples/forwarded/
I guess it comes down to what mailu supports, I have never used that.
If you are talking about SMTP and IMAP, I don't think there's a standard way to do this. You'd have to set up port forwarding on the VPS for the SMTP ports and IMAP port, and set up your home server to accept connections from any IP over the wireguard interface.
That's exceedingly horrible though, and there's a better option for SMTP at least: set up an MTA (e.g. Postfix) on the VPS and have it relay mail to the real destination server. Outgoing mail never has to touch your home server (except for your client copying it into the Sent folder over IMAP); just send it out through the VPS directly. Or, if you're using some built-in web client, configure the MTA on your home server to send mail through the VPS's MTA.
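A rough sketch of what the relay on the VPS could look like in Postfix (domain and wireguard IP are placeholders; whatever mailu expects may differ):

```
# /etc/postfix/main.cf on the VPS
relay_domains = example.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport
# route mail for the domain to the home server over the wireguard tunnel
example.com    smtp:[10.0.0.2]:25
```

After editing the transport file, run `postmap /etc/postfix/transport` so Postfix picks it up.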
No. (Of course, if you want to use it, use it.) I used it for everything on my server starting out because that's what everyone was pushing. Did the whole thing: used images from Docker Hub, used/modified dockerfiles, wrote my own, used first Portainer and then docker-compose to tie everything together. That was until around 3 years ago, when I ditched it and installed everything normally, I think after a series of weird internal network problems. Honestly, the only positive thing I can say about it is that you don't have to manually allocate ports for those services that can't listen on unix sockets, which always feels a bit yucky.
- A lot of images come from some random guy you have to trust to keep their images updated with security patches. Guess what, a lot don't.
- Want to change a dockerfile and rebuild it? If it's old and uses something like "ubuntu:latest" as a base and downloads similar "latest" binaries from somewhere, good luck getting it to build or work because "ubuntu:latest" certainly isn't the same as it was 3 years ago.
- Very Linux- and x86_64-centric. Linux is of course not really a problem (unless on Mac/Windows developer machines, where docker runs a Linux VM in the background, even if the actual software you're working on is cross-platform. Lmao.) but I've had people complain that Oracle Free Tier aarch64 VMs, which are actually pretty great for a free VPS, won't run a lot of their docker containers because people only publish x86_64 builds (or worse, write dockerfiles that only work on x86_64 because they download binaries).
- If you're using it for the isolation, most if not all of its security/isolation features can be used in systemd services. Run `systemd-analyze security UNIT` to see them.
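To give an idea, the systemd side of that is just a handful of hardening directives in a unit's [Service] section (the binary path and the particular selection here are illustrative, not a recommendation for any specific service):

```ini
[Service]
ExecStart=/usr/local/bin/myservice    # hypothetical service binary
DynamicUser=yes           # run as a transient unprivileged user
ProtectSystem=strict      # mount /usr, /boot, /etc read-only for the service
ProtectHome=yes           # hide /home, /root, /run/user
PrivateTmp=yes            # private /tmp and /var/tmp
NoNewPrivileges=yes       # block setuid/privilege escalation
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
```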
I could probably list more. Unless you really need to do something like dynamically spin up services with something like Kubernetes, which is probably way beyond what you need if you're hosting a few services, I don't think it's something you need.
If I can recommend something else to look at instead, it would be NixOS. I originally got into it because of the declarative system configuration, but it does everything people here would usually use Docker for and more. I've seen it described as "Docker + Ansible on steroids", but it uses a more typical central package repository, so you do get security updates for everything you have installed, and your entire system as a whole is reproducible from a set of config files. (You can still build Nix packages from the 2013 version of the repository, I think, though they won't necessarily run on modern kernels because of kernel ABI changes since then.) However, be warned: you need to learn the Nix language and NixOS configuration, which has quite a learning curve tbh. On the other hand, setting up a lot of services is as easy as adding one line to the configuration to enable the service.
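To give an idea of what "one line per service" means, here's a minimal NixOS configuration sketch (these module options are real, but the selection of services is arbitrary):

```nix
{ config, pkgs, ... }:
{
  # each line pulls in the package, a systemd unit, users, defaults, etc.
  services.openssh.enable = true;
  services.nginx.enable = true;
  services.postgresql.enable = true;
}
```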
They can claim it’s a secure protocol because they have full control over it. An application like Beeper gaining access undermines this.
Claiming their protocol is "security by obscurity" would not be the win for them you think it is.
A PTS is a character device. Writing to it makes output appear in the terminal; reading from it consumes the input buffer. So writing to it from a separate shell effectively does the same thing as calling print() from Python, which has it as its inherited stdio. There is a way to inject data into a PTS's input buffer, but it's not straightforward and works in a completely different way. Use something like tmux instead, or better, sockets.
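You can see the two directions with Python's pty module (this just creates a fresh pty pair in-process; a real /dev/pts/N from your shell behaves like the slave side here):

```python
import os
import pty

# master is the "terminal" side (what tmux/sshd would hold),
# slave is what appears as /dev/pts/N inside the session.
master, slave = pty.openpty()

# Writing to the slave -- like `echo hi > /dev/pts/N` from another
# shell -- only produces *output* on the terminal, same as print():
os.write(slave, b"hello from another process\n")
out = os.read(master, 1024)  # the terminal side reads it as output

# Injecting *input* into the session means writing to the master side,
# which only the process holding the master fd can do:
os.write(master, b"typed input\n")
```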
I used the beta UI for a while, and imo the new one is worse since it brings back some of the old UI (i.e. the 1:1 port of the desktop UI for the server sidebar). What they had in the beta UI for server selection was so much better.
Of course, this is what I see /s