this post was submitted on 08 Dec 2023
622 points (96.4% liked)
Programmer Humor
Because languages need to be able to handle the very common edge cases where data sources don't return complete data.
Null coalescing in a null-safe language (like Dart) makes it much easier to read code and reason about the risk of handling null than in older languages that just panic the moment a null shows up unexpectedly.
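As a rough sketch of the point above, here is what null coalescing (`??`) plus optional chaining (`?.`) look like in TypeScript with strict null checks; the `User` type and `displayName` function are made up for illustration:

```typescript
// Hypothetical example: reading a display name from data that may be incomplete.
interface User {
  name?: string | null;
}

// Without null safety, user.name.trim() would throw at runtime on a missing name.
// With ?. and ??, the fallback is explicit and the compiler forces the
// missing-value case to be handled.
function displayName(user: User): string {
  return user.name?.trim() ?? "anonymous";
}
```

So `displayName({})` and `displayName({ name: null })` both fall back to `"anonymous"` instead of panicking.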
For older languages, null coalescing is a great readability improvement. But null itself is a bad concept, and I don't see a reason why new languages should adopt it. That, of course, doesn't change the fact that we need to deal with the nulls we already have.
How are we supposed to deal with null values, though? It's an important concept that we can't eliminate without losing information and context about our data.
0 and "" (the empty string/char) are very often not equivalent to null in my use cases, and they mean different things when I encounter them.
You could use special arbitrary values to indicate invalid data, but at that point you're just doing null with extra steps, right?
I'm really lost as to how the concept isn't necessary.
One alternative is monadic types like Result or Maybe, which contain either a value or an error/no value, and force the caller to check which one they got.
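A minimal sketch of a Maybe-style type as a tagged union in TypeScript (all names here are illustrative, not from any particular library):

```typescript
// A Maybe<T> either holds a value ("some") or explicitly holds nothing ("none").
type Maybe<T> = { kind: "some"; value: T } | { kind: "none" };

const some = <T>(value: T): Maybe<T> => ({ kind: "some", value });
const none = <T>(): Maybe<T> => ({ kind: "none" });

// Unlike returning a raw null, returning Maybe<number> makes the
// "no value" case part of the type, so it cannot be silently ignored.
function parseAge(input: string): Maybe<number> {
  const n = Number(input);
  return Number.isInteger(n) && n >= 0 ? some(n) : none();
}

// Consumers must check the tag before touching .value; the compiler
// rejects code that reads the value in the "none" branch.
function describe(age: Maybe<number>): string {
  return age.kind === "some" ? `age is ${age.value}` : "age unknown";
}
```

The key difference from null is that the absence is carried in the type itself: you can't forget to handle `none` the way you can forget a null check.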