TCB13

joined 2 years ago
[–] [email protected] -1 points 3 weeks ago

The price, lol. Yet another scam, like the first version.

[–] [email protected] 0 points 2 months ago

Debian repositories include the dav module by default. Not sure what's going on with Docker.

[–] [email protected] 1 points 2 months ago (2 children)

Nginx is easy to set up as a WebDAV server.
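For instance, a minimal sketch of such a server block (port, paths and credentials are illustrative; full RFC 4918 client compatibility additionally needs the dav-ext module, packaged in Debian as libnginx-mod-http-dav-ext):

```
server {
    listen 8080;
    server_name dav.example.com;  # hypothetical host

    location / {
        root /srv/webdav;

        # core dav module: write methods
        dav_methods PUT DELETE MKCOL COPY MOVE;
        dav_access user:rw group:rw;
        create_full_put_path on;

        # basic auth so the share isn't world-writable
        auth_basic "WebDAV";
        auth_basic_user_file /etc/nginx/htpasswd;

        # don't cap upload sizes
        client_max_body_size 0;
    }
}
```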

[–] [email protected] 5 points 2 months ago (3 children)

All his files are secure and properly synced... unlike Nextcloud.

[–] [email protected] 2 points 3 months ago (5 children)

SpamAssassin is useless these days; you'd be better off using rspamd.

https://workaround.org/ispmail-bookworm/catching-spam-with-rspamd/

[–] [email protected] 2 points 3 months ago (1 children)

Some people can't because they need updated proofing tools and that version no longer has updates.

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago)

They do lock you in on handheld devices, but that seems to be a consequence of the fact that they store all emails encrypted on the server. After reading this link (“[…]Since IMAP can’t decrypt your emails[…]”), I agree that they are just implementing PGP with extra steps and creating an unneeded layer (the bridge).

Yes, that's precisely the problem there. You can use PGP with any generic IMAP provider and that will work just fine with handheld devices. There are multiple mail clients capable of doing it, and all your mail is still encrypted on the server. Proton just made an alternative implementation that forces you into proprietary systems because it's more convenient for them.

Those kinds of setups, with servers encrypting your mail and still delivering it over IMAP, are fairly easy to implement, here's an example. They simply decided to go all proprietary.

The reason I would not compare it to XMPP is because they are still using SMTP. It is when they stop using SMTP or force others to use something else…

On a generic mail system SMTP is used in two places: 1) from your mail client to your provider and 2) between your provider and other providers. Proton is NOT using SMTP for the first step, making it non-standard and much more closed.
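To make that first step concrete, this is roughly what standard submission (RFC 6409) looks like against any generic provider; hostnames and credentials below are placeholders. Against Proton this only works through their local bridge, never directly against their servers:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "you@example.org"
msg["Subject"] = "hello"
msg.set_content("delivered over plain, standard SMTP")

# port 587 + STARTTLS is the standard submission setup
with smtplib.SMTP("smtp.example.com", 587) as smtp:
    smtp.starttls()
    smtp.login("me@example.com", "app-password")
    smtp.send_message(msg)
```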

[–] [email protected] 1 points 3 months ago (2 children)

I want to learn about PGP and how to encrypt email. Someone sells that service, great. And it is not like I cannot send normal emails to anyone else.

I don't disagree with you; I believe it as well. PGP as it stands is cumbersome.

The thing is that they could've still implemented an easy-to-use, "just log in and send email" type of web client and abstracted the user away from the PGP complexities while still delivering everything over IMAP/SMTP.

They are using the same standard, not some made-up version of SMTP (when sending to other servers; I assume any email from client A to client B, both being Proton customers, never leaves their servers, so no need for a new protocol).

You assume correctly, but when your mail client sends an email, instead of using SMTP to submit it to their server, it uses a proprietary API in a proprietary format; the same goes for receiving email.

This is well documented, and to prove it further: if you want to configure Proton in a generic mail client like Thunderbird, you're required to install a "bridge", a piece of software that essentially simulates a local IMAP and SMTP server (which Thunderbird communicates with) and then converts those requests into requests their proprietary API understands. There are various issues with this approach. The most obvious one is that it's an extra step; another is that on iOS, for example, you're forced to use their mail app because you can't run the bridge there.

The bridge is an afterthought to support generic email clients and generic protocols; it only works how and where they say it should work, and may be taken away at any point.

while being fully open source using open standards

Delivering your data over proprietary APIs doesn't count as "open standards" - sorry.

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago)

Would it be inaccurate to say that your fear is that Proton pulls an “Embrace, Extend, Extinguish” move?

No, it wouldn't be. But they never "embraced" in the first place, as there was never direct IMAP to their servers; instead it's a proprietary API serving data in a proprietary format.

I also see how that would make Proton like WhatsApp, which has its own protocol and locks its users in.

The problem isn't that taking down the bridge would make Proton like WhatsApp. It's the other way around: when they decided to build their internals with proprietary protocols and solutions instead of e.g. IMAP+SMTP, they became the WhatsApp of email. Those things shouldn't be add-ons or an afterthought; they should be built into the core.

This clearly shows that making open solutions ranks very low on their company and engineering priority list. If it were at the top, they would've built everything around IMAP instead.

I could download an archive of everything I have on Proton without a hitch.

Yes, you can, but the data will come in more proprietary formats that are hard to import anywhere else - at least some of the data. They've improved this situation, but it's still less than ideal. In the beginning they would export contacts and calendars in some JSON format; I see they've moved to vCard and iCal now.

[–] [email protected] 1 points 3 months ago (1 children)

I work in another big4 company, and I have a strong feeling that your claims apply to us as well.

That's sad, but it is the world we live in.

[–] [email protected] 1 points 3 months ago (3 children)

Okay, here are a few thoughts:

  • Companies like to have someone to blame when things go wrong; if they choose open-source there isn't anyone to sue;
  • Buying proprietary stuff means you're outsourcing the risks of that product;
  • Corruption pushes for proprietary: they might be buying software made by someone close to the CTO, CEO or another decision maker in the company - an old friend, family, or straight-up under-the-table corruption;
  • Most non-tech companies use services from consulting companies in order to get their software developed / running. Consulting companies often fall under the last point, and besides that they have large incentives from companies like Microsoft to push proprietary services. For e.g. Microsoft will easily provide all of a consulting company's employees with free Azure services, Office and other discounts if they enter an exclusivity agreement to sell their tech stack. To make things worse, consulting companies live off cheap developers (like interns), and Microsoft's platform makes it easy for anyone to code and deploy;
  • Microsoft provides a cohesive ecosystem of products that integrate really well with each other and usually don't require much effort to get things going - open-source, however, usually requires custom development and a ton of work to file down the "sharp angles" between multiple solutions that aren't related and might not be easily compatible with each other;
  • Open-source requires a level of expertise that more than half of developers and IT professionals simply don't have. This reinforces the previous point even more. Senior open-source experts are more expensive than simply buying proprietary solutions;
  • If we consider the price of a senior open-source expert + software costs (usually free), the cost of open-source is considerably lower than the cost of cheap developers + proprietary solutions - but consider that we're talking about companies here. Companies will always prefer to hire more, less expensive and less proficient people, because that makes them easier to replace, and you'll pay less in taxes;
  • Companies prefer to hire services from other companies instead of employees, which makes proprietary vendors more compelling. This happens because, from an accounting / investor perspective, employees are bad and subscriptions are cool (less taxes, no responsibilities, etc.);
  • The companies who build proprietary solutions work really hard to get vendors to sell their software: they provide commissions, support, and the promise that if anything goes wrong they'll be there. This increases the number of proprietary-only vendors, which reinforces everything above. If you're starting out selling software or networking services, there's little incentive for you to go pure "open-source": fewer companies means less visibility, fewer (and more expensive) professionals, thinner margins, a less positive market image, fewer customers and lower profits.

Unfortunately things are really rigged against open-source solutions and anyone who tries to push for them. The "experts" who work in consulting companies are part of this, as they usually don't even know how to do things without the proprietary solutions. Let me give you an example: once I had to work with E&Y, one of those big consulting companies, and I noticed some awkward things while having conversations with both low-level employees and partners / middle management - most of the time they weren't aware that alternatives even exist. A manager of a digital transformation and cloud solutions team, who started his career at E&Y, wasn't aware that there are open-source alternatives to Google Workspace and Microsoft 365 for e-mail. I probed a TON around that, and the guy, a software engineer with a university degree, didn't even know what Postfix was, or the history of email.

[–] [email protected] 1 points 3 months ago

Yeah, it's all about outsourcing the risk to someone else.

 

Considering a lot of people here are self-hosting both private stuff, like a NAS, and public stuff, like websites and whatnot, how do you approach segmentation in the context of virtual machines versus dedicated machines?

This is generally how I see the community handling it:

Scenario 1: Air-Gapped, Fully Isolated Machine for Public Stuff

Two servers: one for the internal stuff (NAS) and another for the public stuff (websites, email, etc.), totally isolated from your LAN. Preferably with a public IP that is not the same as your LAN's, and where traffic to that machine doesn't go through your main router - e.g. a switch between the ISP ONT and your router that also has a cable connected to the isolated machine. This way the machine is completely isolated from your network and not dependent on it.

Scenario 2: Single Server with an Exposed VM

A single server hosting two VMs, one to host a NAS along with a few internal services running in containers, and another to host publicly exposed websites. Each website could have its own container inside the VM for added isolation, with a reverse proxy container managing traffic.

For networking, I typically see two main options:

  • Option A: Completely isolate the "public-facing" VM from the internal network by using a dedicated NIC in passthrough mode for the VM;
  • Option B: Use a switch to deliver two VLANs to the host—one for the internal network and one for public internet access. In this scenario, the host would have two VLAN-tagged interfaces (e.g., eth0.X) and bridge one of them with the "public" VM’s network interface. Here’s a diagram for reference: https://ibb.co/PTkQVBF

In the second option, a firewall would run inside the "public" VM to drop all inbound traffic except HTTP. The host would simply act as a bridge and would not participate in that network in any way.
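As a sketch of Option B with systemd-networkd (VLAN ID, interface and bridge names are my own picks, not a reference setup; the internal VLAN is configured the same way and omitted here):

```
# /etc/systemd/network/10-eth0.network -- physical uplink; host takes no address
[Match]
Name=eth0

[Network]
VLAN=eth0.20
LinkLocalAddressing=no

# /etc/systemd/network/20-eth0.20.netdev -- tagged interface for the public VLAN
[NetDev]
Name=eth0.20
Kind=vlan

[VLAN]
Id=20

# /etc/systemd/network/30-br-public.netdev -- bridge the "public" VM attaches to
[NetDev]
Name=br-public
Kind=bridge

# /etc/systemd/network/40-eth0.20.network -- enslave the VLAN interface
[Match]
Name=eth0.20

[Network]
Bridge=br-public

# /etc/systemd/network/50-br-public.network -- host stays out of this network too
[Match]
Name=br-public

[Network]
LinkLocalAddressing=no
```

The VM then attaches to br-public as its bridge device, and its own firewall decides what gets in.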

Scenario 3: Exposed VM on a Windows/Linux Desktop Host

Windows/Linux desktop machine that runs KVM/VirtualBox/VMware to host a VM that is directly exposed to the internet with its own public IP assigned by the ISP. In this setup, a dedicated NIC would be passed through to the VM for isolation.

The host OS would be used as a personal desktop and contain sensitive information.

Scenario 4: Dual-Boot Between Desktop and Server

A dual-boot setup where the user switches between one OS for daily usage and another for hosting stuff when needed (with a public IP assigned by the ISP). The machine would have a single Ethernet interface, and the user would manually switch the network cable between: a) the router (NAT/internal network) when running the "personal" OS, and b) a direct connection to the switch (and ISP) when running the "public/hosting" OS.

For increased security, each OS would be installed on a separate NVMe drive, and the "personal" one would use the TPM with full disk encryption to protect sensitive data in case the "public/hosting" system were ever compromised.

The theory here is that, if properly done, the TPM doesn't release the keys to decrypt the "personal" OS disk when the user has booted into the "public/hosting" OS.
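On Linux that sealing can be done with systemd-cryptenroll, along these lines (the device path and PCR selection are illustrative; choosing which PCRs to bind against is exactly the "if properly done" part):

```
# Bind the LUKS key of the "personal" disk to the TPM, released only when
# the measured boot state (here PCR 7, Secure Boot policy) matches
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
```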

People also seem to combine these scenarios with Cloudflare Tunnels or reverse proxies on a cheap VPS.


What's your approach / paranoia level :D

Do you think using separate physical machines is really the only sensible way to go? How likely do you think VM escape attacks and VLAN hopping or other networking-based attacks are?

Let's discuss how secure these setups are, what pitfalls one should watch out for on each one, and what considerations need to be addressed.

 

cross-posted from: https://lemmy.world/post/14398634

Unfortunately I was proven right about Riley Testut. He's yet another greedy person, barely better than Apple. After bitching at Apple to remove GBA4iOS from the App Store, he's now leveraging Delta to force people into his AltStore.

Delta has finally made its way to the App Store. Additionally, the Delta developer has also published their alternative marketplace, AltStore, in the EU today.

If you're in the EU you'll only be able to get Delta on the AltStore and that requires:

This is complete bullshit; he could've just launched Delta on the App Store in Europe as well, but he decided not to.

Thanks Riley Testut for being a dick to the people that actually forced Apple into allowing alternative app stores in the first place.


GitHub issue related to this dick move: https://github.com/rileytestut/Delta/issues/292

 

Here's my take:

The domain aftermarket has a big problem... it exists. This market should never have been allowed to exist in the first place. ICANN should've blocked this bullshit a long time ago and forced registrars to just let domains expire and free up the namespace, and also added a few provisions about unused domain names and about selling them.

 

Hello,

My IoT/Home Automation needs are centered around custom built ESPHome devices and I currently have them all connected to a HA instance and things work fine.

Now, I like HA's interface and all the eye candy, however I don't like the massive amount of resources it requires, the fact that its storage usage keeps growing, and that it is essentially a huge, albeit successful, Docker clusterfuck.

Is there any alternative dashboard that just does this:

  1. Specifically made for ESPHome devices - no other devices required;
  2. Single daemon, or something in PHP/Python/Node, that you can set up manually with a few systemd units;
  3. Connects to the ESPHome devices, logs the data and shows a dashboard with it;
  4. Runs offline, doesn't go into 24234 GitHub repositories all the time and whatnot.

Obviously I'm expecting more manual configuration; I'm okay with having to edit a config file somewhere to add a device, change the dashboard layout, etc. I also don't need the part of ESPHome that builds and deploys configurations to devices, as I can do that locally on my computer.
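For what it's worth, the device side of this is small; a minimal sketch with the aioesphomeapi library (device address and password are placeholders, and exact call signatures vary a bit between library versions):

```python
import asyncio
from aioesphomeapi import APIClient

async def main():
    # one client per ESPHome node; 6053 is the native API port
    client = APIClient("livingroom.local", 6053, "api-password")
    await client.connect(login=True)

    # print every state update; a real dashboard would log these instead
    def on_state(state):
        print(state)

    client.subscribe_states(on_state)  # awaitable in older releases
    await asyncio.Event().wait()       # run until interrupted

asyncio.run(main())
```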

Thank you.

 

Hey,

For all of you that are running proper setups and using nftables to protect your servers: be aware that pvxe/nftables-geoip can now generate IP lists by country.

This can be used to, for instance, drop all traffic from specific countries, or the opposite: drop everything except your own country.

https://github.com/pvxe/nftables-geoip/commit/c137151ebc05f4562c56e6802761e0a93ed107a2

Here's how you can block / track traffic from certain countries:

Previously you had to load the entire geoip DB, which is multiple GB, and would end up using a LOT of RAM. Those guides aren't yet updated to use the country-specific files, but it's just a matter of changing the include line to whatever you've generated with pvxe/nftables-geoip.
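The end result looks along these lines (the file path, variable and set names depend on what the generator spits out, so treat them as illustrative):

```
# generated by pvxe/nftables-geoip; assumed to define $geoip_ipv4_cn
include "/etc/nftables.d/geoip-cn.nft"

table inet filter {
    set blocked_countries {
        type ipv4_addr
        flags interval
        elements = $geoip_ipv4_cn
    }

    chain input {
        type filter hook input priority filter; policy accept;
        ip saddr @blocked_countries counter drop
    }
}
```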

 

cross-posted from: https://lemmy.world/post/8834324

I'm looking for an application (Windows, or maybe web) that can be used to combine images vertically and horizontally. I usually go with PhotoScape (screenshot) for this, but that's neither free nor updated anymore. Important features for me are being able to combine horizontally or vertically, set the number of rows or columns, and resize the final image.
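As an aside, not the GUI I'm asking for: ImageMagick's montage covers all three requirements from the command line (filenames are just examples):

```
# tile four images into a 2x2 grid with no padding, then resize the result
montage a.png b.png c.png d.png -tile 2x2 -geometry +0+0 combined.png
convert combined.png -resize 1920x combined-small.png
```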

Thank you.

89
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

The Banana Pi BPI-M7 single board computer is equipped with up to 32GB RAM and 128GB eMMC flash, and features an M.2 2280 socket for one NVMe SSD, three display interfaces (HDMI, USB-C, MIPI DSI), two camera connectors, dual 2.5GbE, WiFi 6 and Bluetooth 5.2, a few USB ports, and a 40-pin GPIO header for expansion.

 

cross-posted from: https://lemmy.world/post/7123708

In this article, you will discover the ISO images that Debian offers and learn where and how to download them. I’ll also provide some useful tips on how to use Jigdo to archive the complete Debian repository into ISO images.
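If you've never used it, the Jigdo workflow boils down to one command; the URL below only shows the shape of a DVD image path, check cdimage.debian.org for the current ones:

```
# jigdo-lite downloads the .jigdo recipe plus its .template, then
# reassembles the ISO from packages fetched off a regular Debian mirror
jigdo-lite https://cdimage.debian.org/debian-cd/current/amd64/jigdo-dvd/debian-12-amd64-DVD-1.jigdo
```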

 

tl;dr: he says "x86 took over the server market" because it was the same architecture developers in companies had on their machines, making it very easy to develop applications locally and then ship them to the servers.

Now, this point, among others he made, is a very good one on how and why it is hard for ARM to go mainstream in the datacenter; however, I also feel like he kind of lost touch with reality on this one...

He's comparing two very different situations - more specifically, eras. Developers aren't tied to the underlying hardware anymore like they used to be. The software development market evolved from C to very high-level languages such as JavaScript/TypeScript, and the majority of stuff developed now or in the future will be in those languages, thus the CPU architecture becomes irrelevant.

Obviously very big companies such as Google, Microsoft and Amazon are more than happy to pay the little "tax" to ensure JavaScript runs fine on ARM rather than pay the big bucks they pay for x86.

What are your thoughts?

-5
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

Hello,

There's this website https://weather.ambient-mixer.com/the-perfect-storm that has a nice mixer of background sounds / ambient music.

I would like to know if it's somehow possible to rip the player and all the music it offers in the channel mixers, to use offline.

The same question also applies to those:

https://mynoise.net/NoiseMachines/rainNoiseGenerator.php https://mynoise.net/NoiseMachines/thunderNoiseGenerator.php https://mynoise.net/NoiseMachines/fireNoiseGenerator.php

Thank you.

 

After a few conversations with people on Lemmy and other places, it became clear to me that most aren't aware of what Systemd can do and how much more robust it is compared to the usual "jankiness" we're used to.

In this article I highlight lesser-known features and give a few practical examples of how to leverage Systemd to remove tons of redundant packages and processes.

And yes, Systemd does containers. :)
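A taste of the kind of thing the article gets at (standard systemd tooling, not commands lifted from the article itself):

```
# a calendar-triggered transient unit instead of installing cron
systemd-run --on-calendar='*-*-* 03:00:00' /usr/local/bin/backup.sh

# an OS tree booted as a lightweight container, no Docker required
sudo systemd-nspawn -D /var/lib/machines/debian --boot
```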

 

Hello,

I'm looking for a unit converter written in JS / client-side only that I can self-host / add to a bunch of tools I already use.

I was looking for a suggestion to get something similar to the good old https://joshmadison.com/convert-for-windows/ but that runs in a browser.

Thank you for your suggestions.
