PlutoniumAcid

joined 1 year ago
[–] [email protected] 2 points 1 week ago (1 children)

That sounds awfully complicated for home use.

[–] [email protected] 85 points 1 week ago (7 children)

Zero trust, but you have to use Amazon AWS, Cloudflare, and make your own Telegram bot? And have the domain itself managed by Cloudflare.

Sounds like a lot of trust right there... Would love to be proven wrong.

[–] [email protected] 2 points 2 weeks ago

Barbarian planets are called meteors.

[–] [email protected] 4 points 1 month ago

Yes, you are right of course. It's a sad state of affairs.

[–] [email protected] 41 points 1 month ago (4 children)

It boggles my mind that people fall for these scams, and do so in such large numbers.

So much stupidity and/or so little tech literacy. Ow.

And it's depressing that there are so many sleazy people out there doing all kinds of bad things in general. Shame.

[–] [email protected] 15 points 1 month ago (1 children)

Will existing devices continue to work "forever" or must we add them to the graveyard?

[–] [email protected] 2 points 1 month ago

The apps of the three big European banks I have banked with were able to detect Magisk and refused to run even when they were on the whitelist.

[–] [email protected] 2 points 1 month ago (3 children)

Didn't work for me on a Samsung S6 or S10. Maybe I will try again some day but for now it's not worth the risk of never being able to go back, thanks to the Samsung physical one-time fuse.

My next phone should be a Pixel with Graphene...

[–] [email protected] 0 points 1 month ago (2 children)

Oh right, I completely forgot about the separate device that you have to plug into your computer, and then also plug your card into the device, and then enter your PIN. It's almost as convenient as having the phone app!

[–] [email protected] 10 points 1 month ago (18 children)

European banking apps refuse to launch on unlocked phones. And you need said banking apps as mandatory 2FA to log into your online banking system.

So in the EU you gotta choose between banking and rooting.

22
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

I run an old desktop mainboard as my homelab server. It runs Ubuntu smoothly at loads between 0.2 and 3 (whatever unit that is).

Problem:
Occasionally, the CPU load skyrockets above 400 (yes really), making the machine totally unresponsive. The only solution is the reset button.

Solution:

  • I haven't found the cause yet, but I think that a reboot every few days would prevent it from ever happening. That could be done easily with a crontab line.
  • Alternatively, I would like some dead-simple script running in the background that simply watches the CPU load and executes a reboot when the load climbs over a given threshold.

--> How could such a CPU-load-triggered reboot be implemented?


edit: I asked ChatGPT to help me create a script that is started by crontab every X minutes. The script has a kill threshold that does a kill -9 on the top process, and a higher reboot threshold that ... reboots the machine. Before doing either (or neither), it writes a log line. I hope this will keep my system running, and I will review the log file to see how it fares. Or it might inexplicably break my system. Fun!
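For reference, the reboot-threshold half of that idea fits in a few lines of Python. This is only a minimal sketch, not the actual script: the threshold and log path are made-up placeholders, and it has to run as root.

```python
#!/usr/bin/env python3
"""Minimal load watchdog: reboot when the 1-minute load average gets absurd."""
import os
import subprocess
from datetime import datetime

REBOOT_THRESHOLD = 50.0                    # placeholder; tune for your box
LOG_FILE = "/var/log/load-watchdog.log"    # placeholder log path

load1 = os.getloadavg()[0]                 # 1-minute load average

with open(LOG_FILE, "a") as log:
    log.write(f"{datetime.now().isoformat()} load1={load1:.1f}\n")
    if load1 >= REBOOT_THRESHOLD:
        log.write(f"{datetime.now().isoformat()} threshold exceeded, rebooting\n")
        log.flush()
        # requires root; on Ubuntu /sbin/reboot goes through systemd
        subprocess.run(["/sbin/reboot"], check=False)
```

A crontab line like `*/5 * * * * /usr/local/bin/load-watchdog.py` would run it every five minutes.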

37
submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/[email protected]
 

TLDR: VPN-newbie wants to learn how to set up and use VPN.

What I have:

Currently, many of my selfhosted services are publicly available via my domain name. I am aware that it is safer to keep things closed and use a VPN to access them -- but I don't know how that works.

  • domain name mapped via Cloudflare > static WAN IP > ISP modem > Ubiquiti USG3 gateway > Linux server and Raspberry Pi.
  • ports 80 and 443 forwarded to Nginx Proxy Manager; everything else closed.
  • Linux server running Docker and several containers: NPM, Portainer, Paperless, Gitea, Mattermost, Immich, etc.
  • Raspberry Pi running Pi-hole as DNS server for LAN clients.
  • Synology NAS as network storage.

What I want:

  • access services from WAN via Android phone.
  • access services from WAN via laptop.
  • maybe still keep some things public?
  • noob-friendly solution: needs to be easy to "grok" and easy to maintain when services change.
 

I am looking for an action cam. It does not need to be a GoPro or DJI simply because they are so very expensive -- but finding alternatives is difficult, mostly because all those products are misrepresented on sites like Amazon. Some reviews reveal that the manufacturer offers free add-ons to customers who post 5-star reviews. That means I cannot trust any review at all.

Where can I find honest reviews? How can I choose a decent action cam without getting scammed?

 

TLDR:

  • Update: the server software has a bug in generating and saving certificates. The bug has been reported; as a workaround I added the local IP to my local 'hosts' file so I can continue (but that does not solve it, of course).
  • I suspect there's a problem with running two servers off the same IP address, each with their own DNS name?

Problem:

  • When I enter https://my.domain.abc into Firefox, I get an error ERR_SSL_UNRECOGNIZED_NAME_ALERT instead of seeing the site.

Context:

  • I have a static public IP address, and a Unifi gateway that forwards ports 80 and 443 to my server at 192.168.1.10, where Nginx Proxy Manager is running as a Docker container. This also gives me a Let's Encrypt certificate.
  • I use Cloudflare and have a domain foo.abc pointed to my static public IP address. This domain works, and also a number of subdomains with various Docker services.
  • I have now set up a second server running yunohost. I can access this on my local LAN at https://192.168.1.14.
  • This yunohost is set up with a DynDNS xyz.nohost.me. The current certificate is self-signed.
  • Certain other ports that yunohost wants (22,25,587,993,5222,5269) are also routed directly to 192.168.1.14 by the gateway mentioned above.
  • All of the above context is OK. Yunohost diagnostics says that DNS records are correctly configured for this domain. Everything is great (except reverse DNS lookup which is only relevant for outgoing email).

Before getting a proper certificate for the yunohost server and its domain, I need to make the yunohost reachable at all, and I don't see what I am missing.

What am I missing?
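One way to narrow this down (a diagnostic sketch, not part of the original setup): try a TLS handshake against each hostname and see which names the proxy will actually answer for. The unrecognized_name alert behind ERR_SSL_UNRECOGNIZED_NAME_ALERT shows up as an SSLError in Python; the hostnames below are the placeholders from the post.

```python
#!/usr/bin/env python3
"""Check which SNI names the reverse proxy completes a TLS handshake for."""
import socket
import ssl

NAMES = ["foo.abc", "xyz.nohost.me"]     # placeholder domains from the post

for name in NAMES:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # only testing SNI routing, not cert validity
    try:
        with socket.create_connection((name, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=name) as tls:
                print(f"{name}: handshake OK ({tls.version()}, {tls.cipher()[0]})")
    except ssl.SSLError as exc:
        print(f"{name}: TLS failure: {exc}")
    except OSError as exc:
        print(f"{name}: connection failed: {exc}")
```

If foo.abc succeeds and xyz.nohost.me fails, that would suggest the proxy simply has no host (and no certificate) configured for the second name.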

 

I mean, the simplest answer is to lay a new cable, and that is definitely what I am going to do - that's not my question.

But this is a long run, and it would be neat if I could salvage some of that cable. How can I discover where the cable is damaged?

One stupid solution would be to cut the cable in half, crimp the new ends, and test each half. Repeat iteratively. I would end up with a few broken cables and a bunch of tested cables, but they might be short.
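For what it's worth, that halving approach is a binary search, so the number of cut-and-test rounds grows only logarithmically with cable length. A quick back-of-the-envelope sketch; the lengths and the 1 m resolution are made up for illustration:

```python
import math

# Each cut-and-crimp round rules out half of the remaining length, so
# localizing a single fault to within `resolution_m` metres takes about
# log2(length / resolution) rounds.
def rounds_needed(length_m: float, resolution_m: float = 1.0) -> int:
    return math.ceil(math.log2(length_m / resolution_m))

for length in (30, 50, 100):
    print(f"{length} m run: about {rounds_needed(length)} cut-and-test rounds")
```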

How do the pros do this? (Short of throwing the whole thing away!)

 

edit: you are right, it's the I/O WAIT that is destroying my performance:
%Cpu(s): 0,3 us, 0,5 sy, 0,0 ni, 50,1 id, 49,0 wa, 0,0 hi, 0,1 si, 0,0 st
I could clearly see it using nmon > d > l > - as suggested by @SayCyberOnceMore. Not quite sure what to do about it, as it's simply my sdb1 drive, which is a Samsung 1TB 2.5" HDD. I have now ordered a 2TB SSD, and maybe I will reinstall from scratch on that new drive as sda1. I realize that's just treating the symptom and not the root cause, so I should probably also look for that root cause. But that's for another Lemmy thread!

I really don't understand what is causing this. I run a few very small containers, and everything is fine - but when I start something bigger like Photoprism, Immich, or even MariaDB or PostgreSQL, then something causes the CPU load to rise indefinitely.

Notably, the top command doesn't show anything special: nothing eats RAM, nothing uses 100% CPU. And yet, the load is rising fast. If I leave it be, my ssh session loses connection. Hopping onto the host itself shows a load of over 50, or even over 70. I don't grok how a system can even get that high at all.

My server is an older Intel i7 with 16 GB RAM running Ubuntu 22.04 LTS.

How can I troubleshoot this, when 'top' doesn't show any culprit and it does not seem to be caused by any one specific container?

(this makes me wonder how people can run anything at all off of a Raspberry Pi. My machine isn't "beefy" but a Pi would be so much less.)
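Regarding the "top doesn't show any culprit" part: when the load is driven by I/O wait, the offending processes are usually sitting in uninterruptible sleep (state D) rather than burning CPU, so they never rise to the top of top. A small sketch that lists them; it only reads standard /proc files and assumes nothing about this particular setup:

```python
#!/usr/bin/env python3
"""List processes in uninterruptible sleep (state D), i.e. blocked on I/O."""
import os

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/status") as f:
            fields = dict(line.split(":", 1) for line in f if ":" in line)
    except (FileNotFoundError, ProcessLookupError):
        continue  # the process exited while we were reading
    if fields.get("State", "").strip().startswith("D"):
        print(pid, fields.get("Name", "").strip())
```

Running it while the load is climbing should point at whichever container's processes are stuck waiting on sdb1.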

 

Having ordered my first 3D printer, I am giddy and preparing various things.

I have installed OctoPrint on my home server as a Docker container, but when running it, it seems that it wants a serial connection to a printer. OctoPrint expects to be running on a Raspberry Pi that is connected to the printer via its serial interface.

What am I missing?

The printer I ordered (Prusa Mini) comes with a wifi dongle, so I guess there will be a way to reach it over the network. But that does not automagically mean OctoPrint can work with it.

 

TLDR: I consistently fail to set up Nextcloud on Docker. Halp pls?

Hi all - please help out a fellow self-hoster, if you have experience with Nextcloud. I have tried several approaches but I fail at various steps. Rather than describe my woes, I hope that I could get a "known good" configuration from the community?

What I have:

  • a homelab server and a NAS, wired to a dedicated switch using priority ports.
  • the server is running Linux, Docker, and NPM proxy which takes care of domains and SSL certs.

What I want:

  • a docker-compose.yml that sets up Nextcloud without SSL. Just that.
  • ideally but optionally, the compose file might include Nextcloud office-components and other neat additions that you have found useful.

Your comments, ideas, and other input will be much appreciated!!

 

TLDR: I am running some Docker containers on a homelab server, and the containers' volumes are mapped to NFS shares on my NAS. Is that bad for performance?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but only has a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course it's slower to run off HDDs than off an SSD, but I do not have a large SSD. The question is: (why) would it be "bad practice" to separate CPU and storage this way? Isn't that pretty much what a data center also does?
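If it helps, the cost of NFS-backed volumes can be measured rather than guessed at. A rough sequential-write comparison, purely a sketch: the two paths are placeholders and would need to point at a local SSD directory and the NFS mount.

```python
#!/usr/bin/env python3
"""Very rough sequential-write throughput comparison between two paths."""
import os
import time

PATHS = {
    "local SSD": "/tmp/throughput-test.bin",          # placeholder
    "NFS share": "/mnt/nas/throughput-test.bin",      # placeholder mount point
}
CHUNK = b"\0" * (1 << 20)   # 1 MiB
TOTAL_MIB = 512

for label, path in PATHS.items():
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(TOTAL_MIB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually reached the storage
    elapsed = time.monotonic() - start
    os.remove(path)
    print(f"{label}: {TOTAL_MIB / elapsed:.0f} MiB/s")
```

Random I/O (databases like MariaDB or PostgreSQL) tends to suffer far more over NFS than a sequential test like this suggests, which is usually the real argument against it.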

 

Prove me wrong, please?

edit: thanks for all the great comments, this is really helpful. My main take-away is that it does work, but requires dry air. In humid conditions it doesn't really do anything.

Spouse bought this thing that claims to cool the air by blowing across some moist pads. It's about as large as a toaster, and it has a small water tank on the side. The water drips onto the bottom of the device, where it is soaked up by a sort of filter. A fan blows air through the filter.

  1. Spouse insists that the AIR gets cooled by evaporation.
  2. I say the FILTER gets cooled by evaporation.
  3. Spouse says the cooled filter then cools the air, so it works.
  4. I say the evaporation pulls heat (and water) from the filter, so the output is actually air that is both warmer and wetter than the input air. That's not A/C, that's a sauna. (Let's ignore the microscopic amount of heat generated by the cheap Chinese fan.)

By my reckoning, the only way to cool a ROOM is to transport the heat outside. This does not do that.

We can cool OURSELVES by letting a regular fan blow on us = WE are the moist filter, and the evaporation of our sweat cools us. One could argue that the slightly more humid air from this device has a better heat transfer capacity than drier air, but still, it is easier to sweat away heat in dry air than in humid air.
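The "works, but requires dry air" takeaway from the edit above can even be put into numbers: an ideal evaporative cooler can only bring the outlet air down toward the wet-bulb temperature of the incoming air, which sits well below the dry-bulb temperature in dry air and barely below it in humid air. A small sketch using one published empirical approximation (Stull 2011); the 30 °C room temperature is just an example:

```python
import math

def wet_bulb_c(t_c: float, rh_pct: float) -> float:
    """Stull (2011) empirical wet-bulb approximation; T in deg C, RH in percent.
    Reasonable for ordinary indoor conditions (roughly 5-99% RH)."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Best-case outlet temperature of an evaporative cooler fed with 30 deg C air:
for rh in (20, 40, 60, 80):
    print(f"30 deg C at {rh:2d}% RH -> wet-bulb limit approx {wet_bulb_c(30, rh):.1f} deg C")
```

The catch is that the outlet air leaves the device nearly saturated, so in a closed room the humidity climbs and the effect fades, which matches the "doesn't really do anything in humid conditions" point.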

Am I crazy? I welcome your judgment!
