stardreamer

joined 1 year ago
[–] [email protected] 21 points 11 months ago* (last edited 11 months ago) (11 children)

Having a good, dedicated e-reader is a hill that I would die on. I want a big screen, physical buttons, light weight, multi-week battery life, and an e-ink display. Reading for 8 hours on my phone makes my eyes go twitchy. And TBH it's been a pain finding something that supports all of that and has a reasonably open ecosystem.

When reading for pleasure, I'm not gonna settle for a "good enough" experience. Otherwise I'm going back to paper books.

[–] [email protected] 3 points 11 months ago

Has anyone ever followed the standards properly? Linux's TCP implementation contains weird workarounds because it had to replicate the same non-standard behavior as BSD, which was added because there are too many buggy TCP implementations out there that break if the RFC is followed to the letter...

[–] [email protected] 10 points 11 months ago* (last edited 11 months ago)

*Gasp* the registration is coming from inside the colo!

[–] [email protected] 7 points 11 months ago* (last edited 11 months ago)

If we're nitpicking about AMD: another thing I dislike about them is their smaller presence in the research space compared to their competitors. Both Intel and NVIDIA throw money at risky new ideas like crazy (NVM, DPUs, GPGPUs, P4, frame generation). Meanwhile, AMD seems to hop in only once a specific area is well established and has an existing market.

For consumer stuff, AMD is definitely my go-to. But it occurs to me that we need companies willing to fund research in academia, even if that research doesn't have a great track record of producing profitable results.

[–] [email protected] 4 points 11 months ago

The mobile port of Final Fantasy Tactics is still superb. The UI is a bit dated, but the strategy game itself is not.

[–] [email protected] 0 points 11 months ago* (last edited 11 months ago) (1 children)

And I did the same as a kid in the late 2000s in order to play World of Warcraft. I found someone's info in a random online dump, filled it in, and didn't think twice about the ID theft. What I learned is that there are NO "fake" IDs that can pass this check. It's just plain old theft of real people's IDs.

The ID itself is encoded as a 6-digit region code, an 8-digit DOB, a 3-digit sequence number, and a final check digit. There is no "generated" name that works with a specific ID, since the name isn't encoded anywhere; most reputable vendors check the name-ID pair against an actual government DB.
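For the curious, the last character of the 18-digit resident ID is a check digit (ISO 7064 MOD 11-2) computed over the first 17 digits, so a vendor can sanity-check an ID offline before ever querying a government DB. A minimal sketch, using the well-known documentation example ID 11010519491231002X rather than any real person's number:

```python
# Sketch: validate the check digit of an 18-digit Chinese resident ID
# (ISO 7064 MOD 11-2). This checks internal consistency only; it says
# nothing about whether the ID belongs to a real person.

WEIGHTS = [7, 9, 10, 5, 8, 4, 2, 1, 6, 3, 7, 9, 10, 5, 8, 4, 2]
CHECK_CODES = "10X98765432"  # indexed by (weighted sum mod 11)

def check_digit(first17: str) -> str:
    total = sum(int(d) * w for d, w in zip(first17, WEIGHTS))
    return CHECK_CODES[total % 11]

def is_consistent(id18: str) -> bool:
    return len(id18) == 18 and check_digit(id18[:17]) == id18[-1].upper()
```

For example, `is_consistent("11010519491231002X")` returns `True`, while the same digits with any other final character fail. Passing this check is exactly why "generated" IDs work: the hard part was never the format, it's the real name attached to it.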

The problem is that this IS the exact same info used to apply for bank accounts, loans, mobile phone numbers, etc. And nobody bats an eye when a pirated gaming app asks for it. This could be legitimate, but I'm more inclined to say it's someone's ID collection scheme. If so, it could be doing more than just collecting IDs (because why not?), and at the very least it's facilitating more ID theft.

[–] [email protected] 1 points 11 months ago (3 children)

Btw, this is most likely a scam. It's the equivalent of a random app you found asking for your name, DOB, and SSN (the ID encodes both location and DOB). Even if you have an actual ID, DO NOT FILL THIS OUT. Delete, purge, and move on.

[–] [email protected] 1 points 11 months ago

Haskell is still as beautiful as the day it was first made.

Except for class methods. We don't talk about methods.

[–] [email protected] 39 points 1 year ago* (last edited 1 year ago) (1 children)

The argument is that processing data physically near where it is stored (known as NDP, near-data processing, as opposed to traditional architectures where data sits off-chip) is more power-efficient and lower-latency for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with a design that can do complex computations much faster than before using NDP.

Personally, I'd say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas, such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute), are notoriously hard to work with. It takes a CS-MS-level understanding of the architecture to write a program in the P4 language (which doesn't allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it's worthless if most programmers on the job market can't work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it's just way too costly. People want to buy new hardware, install it, compile existing code, and see big numbers go up (or down, depending on which numbers).

I would say the future is where you have a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) as dedicated accelerators. Existing application code probably won't be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM, if your computer supports it. This way, your standard 9-to-5 programmer can still work like they used to and leave the fancy performance optimization to a few experts.
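The dispatch pattern I mean is roughly this (a toy sketch; all the names are hypothetical, and a real library would probe actual hardware instead of returning None):

```python
# Toy sketch of library-level accelerator dispatch: application code
# calls one function, and the library transparently offloads to a
# data mover if one is present, falling back to the CPU otherwise.

def detect_data_mover():
    # A real implementation would probe the platform (driver, sysfs,
    # firmware tables...). Here we pretend no accelerator exists.
    return None

def fast_copy(dst: bytearray, src: bytes) -> bytearray:
    mover = detect_data_mover()
    if mover is not None:
        mover.copy(dst, src)  # offloaded: CPU is free to do other work
    else:
        dst[:] = src          # fallback: plain CPU copy
    return dst
```

The application never changes; only `detect_data_mover()` gets smarter as hardware shows up.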

[–] [email protected] 2 points 1 year ago

Also, if the router blocks ICMP for some reason, you can always send an ARP request manually and check the response latency.

[–] [email protected] 2 points 1 year ago (1 children)

This is solving a problem we DO have, albeit in a different way. Email is ancient; the protocol allows you to self-identify as whoever you want. Let's say I send an email from the underworld (server IP address) claiming I'm Napoleon@france (user@domain): the only reason my email is rejected is that the recipient knows Napoleon resides on the server france, not underworld. This validation is mostly done via tricky DNS hacks, and a huge part of it is built on top of Google's infrastructure. If for some reason Google decides I'm not trustworthy, then it doesn't matter if I'm actually sending Napoleon's mail from france; it's going to be flagged as spam on most servers regardless.
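The main "DNS hack" here is SPF: the domain owner publishes a TXT record listing which server IPs are allowed to send mail for that domain, and receivers look it up to decide whether to trust the sender. A sketch for the hypothetical france domain (the IP is a documentation address, not real infrastructure):

```
france.example.    IN TXT    "v=spf1 ip4:203.0.113.5 -all"
```

The `-all` at the end says "reject mail from any other IP", which is exactly the "Napoleon only lives in france" assertion, just centralized in DNS.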

A decentralized chain of trust could potentially replace Google plus all these DNS hacks we have in place, with no central authority deciding who is legitimate. Of all the BS use cases for blockchain, this one doesn't seem that bad: it builds a decentralized chain of trust for an existing decentralized system (email), which is exactly what blockchain was originally designed for.

[–] [email protected] 1 points 1 year ago

Is there a specific reason you're looking at Shadowsocks? The original developer has been MIA for years, and people who used it in the past largely consider it insecure for its original stated purpose.

trojan-gfw is a better modern replacement. However, it requires a certificate to work; you can easily get one via Let's Encrypt.
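For reference, a minimal trojan server config sketch (the password and paths are placeholders; point cert/key at whatever files Let's Encrypt issued for your domain):

```json
{
    "run_type": "server",
    "local_addr": "0.0.0.0",
    "local_port": 443,
    "remote_addr": "127.0.0.1",
    "remote_port": 80,
    "password": ["change-me"],
    "ssl": {
        "cert": "/etc/letsencrypt/live/example.com/fullchain.pem",
        "key": "/etc/letsencrypt/live/example.com/privkey.pem"
    }
}
```

The `remote_addr`/`remote_port` fallback is the point of the design: non-trojan traffic gets forwarded to a real web server, so the endpoint looks like an ordinary HTTPS site.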

At this point, let Shadowsocks, obfs, and KCP die a graceful death, like GoAgent before them.
