rentar42

joined 1 year ago
[–] [email protected] 1 points 5 months ago

I second that. This practice comes from a time when domain names were expensive in several ways: SNI didn't exist or wasn't widespread, so each HTTPS domain needed a dedicated IP address; certificates weren't democratized yet via letsencrypt/acme; and most hosts were big enough to run multiple services, because virtualization wasn't as widely available yet. So putting apps on sub-paths made sense.

Now all of those things are basically dealt with, and putting each app on its own sub-domain just makes way more sense.
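
To illustrate the SNI part: one IP address can now serve any number of HTTPS hostnames, because the client names the host during the TLS handshake and the server picks the matching certificate. A minimal Python sketch of that mechanism (hostnames and certificate paths are made up):

```python
import socket
import ssl

# One listening socket, multiple certificates: the server picks the
# right one based on the SNI hostname the client sends.
# Hostnames and certificate paths below are placeholders.
contexts = {}
for host in ("app1.example.com", "app2.example.com"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(f"/etc/ssl/{host}/fullchain.pem",
                        f"/etc/ssl/{host}/privkey.pem")
    contexts[host] = ctx

default_ctx = contexts["app1.example.com"]

def pick_cert(ssl_socket, server_name, initial_context):
    # Called during the handshake with the client's SNI value;
    # swaps in the context whose certificate matches.
    if server_name in contexts:
        ssl_socket.context = contexts[server_name]

default_ctx.sni_callback = pick_cert

with socket.create_server(("0.0.0.0", 443)) as srv:
    with default_ctx.wrap_socket(srv, server_side=True) as tls:
        conn, addr = tls.accept()  # certificate chosen per SNI
```

In practice a reverse proxy with built-in ACME support does all of this (plus the certificate issuance) for you.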

[–] [email protected] 20 points 6 months ago (1 children)

First: love that that's a thing, but I find the blog post hilarious:

> We believe this choice must include the one to migrate your data to another cloud provider or on-premises. That’s why, starting today, we’re waiving data transfer out to the internet (DTO) charges when you want to move outside of AWS.

and later

> We believe in customer choice, including the choice to move your data out of AWS. The waiver on data transfer out to the internet charges also follows the direction set by the European Data Act and is available to all AWS customers around the world and from any AWS Region.

But sure: it's out of their love for customer choice that they offer this now. The fact that it also fulfills the requirements of the European Data Act is purely coincidental; they would have done it anyway, for sure.

Remember folks: regulation works. Sometimes corporations need the state(s) to force their hand to do the right thing.

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago) (1 children)

I went with iDrive e2 (https://www.idrive.com/s3-storage-e2/): 5 TB is $150/year (50% off the first year) for S3-compatible storage. My favorite part is that there are no per-request, ingress, or egress costs. That price is all there is.
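
"S3-compatible" just means you point a standard S3 client at their endpoint instead of AWS. A minimal boto3 sketch, with the endpoint, bucket, and credentials as placeholders (e2 shows your real endpoint in its dashboard):

```python
import boto3

# All values below are placeholders; iDrive e2 gives you a
# region-specific endpoint plus an access key pair.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<your-e2-endpoint>",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
```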

[–] [email protected] 37 points 6 months ago (1 children)

> without trusting anyone.

Well, except of course the entity that gave you the hardware. And the entity that preinstalled and/or gave you the OS image. And you have to trust that that entity wasn't fooled into including malicious code in some roundabout way.

Like it or not, there's currently no real way to use any significant amount of computing power without trusting someone. And usually several hundred or thousand someones.

The best you can hope for is to focus that trust on a small number of entities whose own self-interest pushes them to prove worthy of it.

[–] [email protected] 18 points 6 months ago* (last edited 6 months ago) (4 children)

Like many other security mechanisms, VLANs aren't really about enabling anything that can't be done without them.

Instead, they're almost exclusively about FORBIDDING some kinds of interactions that are otherwise allowed by default.

So if your question is "do I need VLANs to enable any features", then the answer is no, you don't (almost certainly; I'm sure there are some weird corner cases and exceptions).

What VLANs can help you do is stop your PoE camera from talking to your KNX, and your Chromecast from talking to your Switch. But why would you want that? They don't normally talk to each other anyway, right? That "normally" is exactly the point: one major benefit of VLANs is not just stopping "normal" phone-homes but containing any security incident to as small a scope as possible.

Imagine someone figured out a way to hack your Switch (maybe even remotely, while you're out!). That would be bad. What would be worse is if that attacker then suddenly had access to your pihole (which is password protected, and the password never flies around your home network unencrypted, right?!) or your PC or your phone ...

So having separate VLANs where each one contains only devices that need to talk to each other can severely restrict the actual impact of a security issue with any of your devices.
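
On Linux, a tagged VLAN sub-interface is just another link type. A rough sketch using the pyroute2 library, with the interface name, VLAN ID, and addressing made up for illustration (requires root):

```python
from pyroute2 import IPRoute

# Hypothetical setup: a separate IoT segment as VLAN 20, tagged
# over eth0. "eth0", the VLAN ID, and the subnet are placeholders.
with IPRoute() as ip:
    parent = ip.link_lookup(ifname="eth0")[0]
    ip.link("add", ifname="eth0.20", kind="vlan",
            link=parent, vlan_id=20)
    iot = ip.link_lookup(ifname="eth0.20")[0]
    ip.addr("add", index=iot, address="192.168.20.1", prefixlen=24)
    ip.link("set", index=iot, state="up")
```

Note that the tagging only creates the separate segment; the actual "these two may not talk" part still comes from the firewall rules (or absent routes) between the VLANs.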

[–] [email protected] 1 points 6 months ago

Since most of those are run commercially and don't make their data easily accessible, that'll be a much different process, I assume. You'll basically have to scrape them like any other website, except you'll specifically be targeting the edit/source view pages. Then find a wiki implementation whose syntax is as close as possible to the one they use (that could be tricky ...) and upload there. So unless you happen to find code from someone who wanted to do the exact same thing, I'm afraid this involves a fair bit of programming/scripting, along the lines of the sketch below.
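
For the common case where the commercial wiki runs MediaWiki (most wiki farms do), a rough sketch could look like this; the base URL and page titles are placeholders, and pypandoc is just one option for the markup conversion:

```python
import requests
import pypandoc  # pip install pypandoc; also needs the pandoc binary

BASE = "https://some-wiki.example.com/index.php"  # placeholder

def dump_page(title: str) -> str:
    # MediaWiki serves a page's raw wikitext via action=raw.
    resp = requests.get(BASE, params={"title": title, "action": "raw"})
    resp.raise_for_status()
    return resp.text

for title in ["Main_Page", "Some_Article"]:  # placeholder titles
    wikitext = dump_page(title)
    # Convert to Markdown so it can go into a Markdown-based wiki.
    md = pypandoc.convert_text(wikitext, "gfm", format="mediawiki")
    with open(f"{title}.md", "w", encoding="utf-8") as f:
        f.write(md)
```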

[–] [email protected] 1 points 6 months ago

Oh, I'm 100% there with you on syntax. But having multiple pieces of software that support the same syntax seems useful.

Personally, I've turned into more of a Markdown person than a traditional-wiki-syntax one. And at least Markdown has gained some level of standardization over time ...

[–] [email protected] 34 points 6 months ago (7 children)

I'm sorry that my attempt to find out what you want (so I could provide useful help) annoyed you.

[–] [email protected] 87 points 6 months ago* (last edited 6 months ago) (20 children)

Without any text it's really hard to guess what you want, and that's why you're getting so many different answers.

Do you want to

Note that I suspect you actually want the third one, in which case I suggest you avoid MediaWiki. Not because it's bad, but because it's almost certainly overkill for your use-case, and there are way simpler, easier-to-set-up-and-maintain systems with fewer moving parts out there.

[–] [email protected] 2 points 6 months ago

Increase the attack surface compared to what? If you don't allow/enable any access to services inside your network from the outside, then by definition you have a smaller attack surface than if you add a VPN to that empty list.

So trivially the answer is "yes, it adds an attack surface".

But what are the alternatives? If you directly expose each individual service on a dedicated port, for example, then you'd add many more (and usually less well hardened) attack surfaces instead.

So if the comparison is "expose 5 web-based services directly" vs. "expose one VPN like WireGuard", then the second option is almost always the clear winner when it comes to security (and frequently also when it comes to ease of setup and comfort).

[–] [email protected] 3 points 6 months ago* (last edited 6 months ago)

This isn't specific to just netdata, but I frequently find projects that have some feature provided via their cloud offering and then say "but you can also do it locally" and gesture vaguely at some half-written docs that don't really help.

It makes sense for them, since one of those is how they make money and the other is how they lose cloud customers, but it's still annoying.

Shoutout to healthchecks.io, which seems to provide both a nice cloud offering and a fully-fledged self-hostable server with good documentation.

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago) (1 children)

I've not found a good solution for actual continuous monitoring, and I'll be following this thread, but I have a similar/related item: I use healthchecks.io (specifically a self-hosted instance) to verify that all my cron jobs (backups, syncs, ...) are working correctly. Often even more involved monitoring solutions don't cover that area (and it can be quite terrible if it goes wrong), so I think it'd be a good addition to most of these setups.
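
The pattern is a dead man's switch: the job pings a per-check URL on success, and the server alerts when an expected ping doesn't arrive. A minimal sketch; the ping URL is a placeholder, and a self-hosted instance uses its own domain instead of hc-ping.com:

```python
import subprocess
import urllib.request

# Placeholder ping URL for one check.
PING_URL = "https://hc-ping.com/your-check-uuid"

result = subprocess.run(["rsync", "-a", "/data/", "/backup/"])

# Ping on success; append /fail to report an explicit failure.
# If neither ping arrives within the check's grace period, the
# server raises an alert on its own.
suffix = "" if result.returncode == 0 else "/fail"
urllib.request.urlopen(PING_URL + suffix, timeout=10)
```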
