GravelPieceOfSword

joined 1 year ago
[–] [email protected] 2 points 1 month ago

Because hosting costs money, and sustainable services need revenue sources.

The news we read is put together by teams of journalists, editors, etc.

Video streaming takes a lot of storage, bandwidth, processing, and licensing.

And so on.

Price gouging is bad, but reasonable income is necessary.

Billboard-style ads that don't target users and don't track effectiveness are financially risky for advertisers, and would pay ad hosts much less.

Anonymous, aggregated tracking is a healthy compromise.

[–] [email protected] 1 points 1 month ago (2 children)

Kudos for putting together good reasons why you don't like PPA, while also acknowledging that Mozilla is trying to solve a real problem.

Yours is one of the very few reasonable objections I've read, IMO. When the PPA outrage first erupted, I read through how it works: there's a unique ID, the website stays unaware of the interaction, but the browser recognizes it and feeds it to an intermediate aggregator, which anonymizes the data by aggregating across multiple users without sharing their IDs. As an attempt at a middle ground, that seems fair to me, especially with the opt-out being so easy.
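Roughly, the aggregation side works like this (a toy Python sketch of the idea, not Mozilla's actual protocol; the threshold is made up):

```python
# Toy model: the aggregator receives one report per converting browser and
# only ever releases per-ad counts, never anything tied to a user.
from collections import Counter

MIN_USERS = 100  # made-up anonymity threshold, purely for illustration

def aggregate(ad_ids):
    """ad_ids: iterable of ad-campaign IDs, one per converting browser.
    No user identifier ever enters this function."""
    counts = Counter(ad_ids)
    # Suppress counts too small to hide any individual user.
    return {ad: n for ad, n in counts.items() if n >= MIN_USERS}

print(aggregate(["shoes-ad"] * 250 + ["niche-ad"] * 3))  # {'shoes-ad': 250}
```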

However, your points about encouraging clickbait, feeding SEO, and the uncertainty that this will actually fix the spamminess of the web as it is are valid concerns.

[–] [email protected] 17 points 3 months ago (2 children)

Sounds like dogs barking at/with each other in the night back when I was growing up. You'd hear the occasional how-how-hoooooww from one of them, and others would join in. Wolf-ish in some ways. The city I grew up in was much less crowded back then.

Now: I guess self-driving cars fill the void left by dogs not barking at each other anymore.

🐺


🚗

[–] [email protected] 5 points 5 months ago* (last edited 5 months ago)

Nominative determinism is pretty accurate. Steve Jobs did generate a lot of jobs. Bill Gates had a lot of gates to his name.

just in case it wasn't obvious

[–] [email protected] 2 points 7 months ago

Superlior you say? Superl!

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

If you want persistent messages, use a messaging app, like another poster suggested. KDE Connect should work, but for some reason it doesn't in my setup.

If you just need transient messages and lightweight sending, which is more my use case, use PairDrop.

The Snapdrop and PairDrop apps are on F-Droid for Android; on desktop, use the PairDrop website.

You can just use the website instead of the app on your phone, too.

Sending over LAN is local - it doesn't go outside your own network.

If the devices are on the same WiFi, no pairing is required.

You can also send across networks by pairing.

Project GitHub repository

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Splunk is already very expensive, to be honest, with their policy of charging based on indexed logs as opposed to used logs (i.e., those actually hit by searches), plus the necessity of indexing a lot of logs for 'in case something breaks'. Bit of hearsay there: while I don't work for the team that manages indexing, I've had quite a few conversations with our internal team.

I was surprised we were moving from Splunk to a lesser-known proprietary competitor (we tried and gave up on Elasticsearch years ago). Splunk is much more powerful for power users, but the alternative cost 7-10 times less, and unfortunately most users didn't use enough of Splunk's power-user functionality to justify it over the competitor.

Being a power user with lots of dashboards, my team still uses Splunk for now, and I'm having background conversations to make sure we don't lose it. I think Cisco would lose out if they jacked up prices; perhaps they'd instead use Splunk as an additional value-add for their infrastructure offerings?

[–] [email protected] 1 points 1 year ago

Here's a slightly more detailed description of my debugging experience over the years (it implicitly includes that of many coworkers too, many of whom I've walked through these stages).

[–] [email protected] 46 points 1 year ago (4 children)

As someone who has done a lot of debugging and written many log analysis tools over the years, it's not an either/or: they complement each other.

I've seen logs dismissed a lot in these threads recently, and while I love the debugger (I'd boast that I know very few people who can play with gdb like I can), logging is an art, and just as essential.

The beginner printf thing is an inefficient learning stage that people get past early in their careers once they learn the debugger, but eventually they'll need to relearn the art of proper logging too, and understand how to use both tools (logging and debugging).

There's a stage when you love prints.

Then you discover debuggers, and you realize they are much more powerful. (For those of you who haven't used gdb enough: you can script it to iterate STL (or any other) containers, and test your fixes without writing any code yet.)
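For instance, a minimal sketch of gdb's Python scripting API (the `dump_vec` command name is mine, and the `_M_impl`/`_M_start`/`_M_finish` fields are libstdc++ `std::vector` internals that may differ across versions):

```python
# Save as dump_vec.py and load with `source dump_vec.py` inside gdb.
import gdb

class DumpVec(gdb.Command):
    """dump_vec VAR: print each element of a libstdc++ std::vector."""

    def __init__(self):
        super().__init__("dump_vec", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        # Walk the vector via its internal begin/end pointers.
        impl = gdb.parse_and_eval(arg)["_M_impl"]
        cur, end, i = impl["_M_start"], impl["_M_finish"], 0
        while cur != end:
            gdb.write(f"[{i}] = {cur.dereference()}\n")
            cur += 1
            i += 1

DumpVec()  # registers the command with gdb
```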

And then, once your (and everyone else's) code has been in production a while, and some random client reports a bug that only happened for a few hard-to-trace events, guess what?

Logs are your best friend. You use them to get the scope of the problem and the region it lives in (much easier if you have indexing tools like Splunk, though grep/awk/sort/uniq also work). You also get the input parameters and output results, and often you spot the root cause without needing to spin up a debugger. That saves a lot of time for everyone.
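A rough Python stand-in for that grep/sort/uniq first pass (the log path and message format here are invented for illustration):

```python
# Count error signatures to get the scope of the problem at a glance.
import re
from collections import Counter

error_sig = re.compile(r"ERROR\s+(\S+)")  # hypothetical log format
counts = Counter()

with open("service.log") as log:
    for line in log:
        match = error_sig.search(line)
        if match:
            counts[match.group(1)] += 1

# Most frequent signatures first, like `grep ERROR | sort | uniq -c | sort -rn`.
for sig, n in counts.most_common(10):
    print(f"{n:6d}  {sig}")
```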

If you can't, you replicate. That often takes a bit of time, but at least your logs give you a better chance of using the right parameters. Then you spin up the debugger (the heavy guns) when all else fails.

Debugging takes more time, and in production systems you often have a lot of issues that turn out to be working as designed, plus plenty of upstream/downstream issues that logs will help you with much faster.

 
[–] [email protected] 6 points 1 year ago

Now onto the four body problem!

42
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

How are y'all managing internal network certificates?

At any point in time, I have between 2 and 10 services, often running on a network behind an nginx reverse proxy, with some variation in certificate setups, none of them ideal. Here's what I've done in the past:

  • set up a CLI CA using openssl
    • somewhat works, but importing CAs into phones was a hassle.
  • self-sign a single cert per service (see the sketch after this list)
    • works, very kludgy, very easy
  • expose the HTTP port only on the lo interface for sensitive services (e.g. the Pi-hole admin), with an SSH local tunnel when needed
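For the self-sign-per-service option, a minimal sketch using Python's `cryptography` package (hostname and validity period are placeholders; point nginx's `ssl_certificate`/`ssl_certificate_key` at the generated files):

```python
# pip install cryptography
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def make_self_signed(hostname: str, days: int = 365):
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=days))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                       critical=False)
        .sign(key, hashes.SHA256())
    )
    with open(f"{hostname}.key", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.PKCS8,
                                  serialization.NoEncryption()))
    with open(f"{hostname}.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

make_self_signed("pihole.lan")  # example hostname
```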

I see Easy-RSA seems to be more user-friendly these days, but I haven't tried it yet.

I'm tempted to try this setup for my LAN-facing services (as opposed to tunnel-only ones, such as the Pi-hole admin):

  • Get a Let's Encrypt cert for a single public DNS domain (e.g. lan.mydomain.org); not sure about a wildcard cert.
  • Use the Let's Encrypt cert on the nginx reverse proxy and expose the various services as sub-URLs (e.g. lan.mydomain.org/nextcloud).

Curious what y'all do and if I'm missing anything basic.

I have no intention of exposing these outside my local network, and I prefer as few client-side changes as possible.

[–] [email protected] 12 points 1 year ago (1 children)


Poor bot did its thing, but it seems the article starts off in a way it can't handle well.

 