excel

joined 1 year ago
[–] [email protected] 32 points 2 months ago* (last edited 2 months ago) (24 children)

If you’re branching logic based on the existence or non-existence of a field rather than on its value (or treating undefined differently from null), I’m going to say you’re the one doing something wrong, not the Java dev.

These two things SHOULD be treated the same by anybody in most cases, with the possible exception of rejecting the latter due to schema mismatch (i.e. when a “name” field should never be defined, regardless of its value).
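In Python terms, treating the two cases the same just means reading with a default; the payloads and field names below are invented for illustration:

```python
import json

# Hypothetical payloads: the "name" field absent vs. explicitly null.
absent = json.loads('{"id": 1}')
explicit_null = json.loads('{"id": 1, "name": null}')

for payload in (absent, explicit_null):
    # dict.get collapses "missing" and "null" into one branch,
    # which is the behavior argued for above in most cases.
    name = payload.get("name")
    display = name if name is not None else "anonymous"
    print(display)  # "anonymous" in both cases
```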

[–] [email protected] 12 points 3 months ago (3 children)

Buy a keyboard and monitor

[–] [email protected] 22 points 3 months ago (5 children)

And the only thing even worse than SCRUM is literally every other option

[–] [email protected] 27 points 7 months ago (1 children)

What this shows is how terrible raw JS is, when all of this crap is required to fix all of the edge cases and make things actually work the way they’re supposed to.

[–] [email protected] 13 points 11 months ago (2 children)

If you get ghosted, it only proves that that person is emotionally immature and wasn’t ready for a relationship anyway, so they did you a favor by outing themself.

[–] [email protected] 15 points 11 months ago (5 children)

This is not useful now, nor will something like this ever be useful.

[–] [email protected] 92 points 1 year ago (4 children)

The damage was not the actual pricing (which was cheaper than Unreal), the reason people are going to leave for Unreal/Godot and never come back is the loss of trust. Nobody wants to be chained down to a company that’s willing to pull the rug out like this.

[–] [email protected] 3 points 1 year ago

Except Unreal already had the same kind of pricing structure that Unity is trying to move towards, that’s why Unity thought they could get away with it.

[–] [email protected] 117 points 1 year ago (8 children)

If you think they had impenetrable security before this, I’ve got some bad news for you…

[–] [email protected] -1 points 1 year ago

Any time a gaming company does something stupid, leave it to gamers to out-stupid the company and prove that they deserved to get shit on in the first place

[–] [email protected] 1 points 1 year ago

Vivaldi will never have it

[–] [email protected] 1 points 1 year ago

No, it’s not in Vivaldi

 

I keep seeing posts about this kind of thing getting people's hopes up, so let's address this myth.

What's an "AI detector"?

We're talking about these tools that advertise the ability to accurately detect things like deep-fake videos or text generated by LLMs (like ChatGPT), etc. We are NOT talking about voluntary watermarking that companies like OpenAI might choose to add in the future.

What does "effective" mean?

I mean something with high levels of accuracy, both highly sensitive (low false negatives) and highly specific (low false positives). High would probably be at least 95%, though this is ultimately subjective.
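As a concrete example (the confusion-matrix numbers here are made up), sensitivity and specificity are just the two halves of a detector's error profile:

```python
# Hypothetical confusion matrix for an "AI detector" on 200 samples.
tp, fn = 90, 10   # fakes correctly flagged / fakes missed (false negatives)
tn, fp = 85, 15   # reals correctly passed / reals wrongly flagged (false positives)

sensitivity = tp / (tp + fn)                  # 0.90 -> 10% false negatives
specificity = tn / (tn + fp)                  # 0.85 -> 15% false positives
accuracy = (tp + tn) / (tp + fn + tn + fp)    # 0.875 overall

# Neither number clears the (admittedly subjective) 95% bar.
print(sensitivity, specificity, accuracy)
```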

Why should the accuracy bar be so high? Isn't anything better than a coin flip good enough?

If you're going to definitively label something as "fake" or "real", you better be damn sure about it, because the consequences for being wrong with that label are even worse than having no label at all. You're either telling people that they should trust a fake that they might have been skeptical about otherwise, or you're slandering something real. In both cases you're spreading misinformation which is worse than if you had just said "I'm not sure".
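A quick Bayes-rule calculation (all numbers invented) shows how fast those wrong labels pile up once fakes are a minority of the content:

```python
# Invented numbers: a detector that is 90% sensitive and 90% specific,
# applied to a stream in which 20% of items are actually fake.
prevalence = 0.20
sensitivity, specificity = 0.90, 0.90

# P(item gets labeled "fake") = true positives + false positives
p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# P(actually fake | labeled "fake"), by Bayes' rule
p_fake_given_flag = sensitivity * prevalence / p_flagged

print(round(p_fake_given_flag, 3))  # 0.692 -> ~31% of "fake" labels are wrong
```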

Why can't a good AI detector be built?

To understand this part you need to understand a little bit about how these neural networks are created in the first place. Generative Adversarial Networks (GANs) are a strategy often employed to train models that generate content. These work by having two different neural networks, one that generates content similar to existing content, and one that detects the difference between generated content and the existing content. These networks learn in tandem, each time one network gets better the other one also gets better.

What this means is that a content generator and a fake-content detector are effectively two sides of the same coin. Improvements to one can always be translated, directly and automatically, into improvements to the other. As a result, the generator will always improve until the detector is fooled about 50% of the time.

Note that not all of these models are trained in exactly this way, but the point is that anything CAN be trained this way, so even if a GAN wasn’t originally used, any improvement in detection can always be translated directly into improved generation that beats that detection. This isn’t just an ordinary “arms race”, because the turnaround time here is so fast there’s no chance of staying ahead of the curve… the generators will always win.
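The feedback loop can be sketched with a toy model (everything here is invented: the “data” is a 1-D Gaussian, the “detector” is a single threshold, and hill-climbing stands in for gradient descent). The generator keeps adjusting until the freshly retrained detector is barely better than a coin flip:

```python
import random
from statistics import mean

random.seed(0)
REAL_MEAN = 4.0   # the "real" data distribution: N(4, 1)
mu = 0.0          # generator parameter: it emits samples from N(mu, 1)

def reals(n):
    return [random.gauss(REAL_MEAN, 1) for _ in range(n)]

def fakes(m, n):
    return [random.gauss(m, 1) for _ in range(n)]

def detector_accuracy(m, n=1000):
    """Retrain a one-parameter detector (a threshold) on fresh samples and
    report how well it separates real data from the generator's output."""
    r, f = reals(n), fakes(m, n)
    t = (mean(r) + mean(f)) / 2          # best threshold: midpoint of the means
    real_side_high = mean(r) > mean(f)   # which side of t counts as "real"
    correct = sum((x > t) == real_side_high for x in r) + \
              sum((x > t) != real_side_high for x in f)
    return correct / (2 * n)

# Adversarial loop: every detector improvement is immediately turned into
# a generator improvement, exactly the dynamic described above.
for step in range(200):
    up, down = detector_accuracy(mu + 0.1), detector_accuracy(mu - 0.1)
    mu += 0.1 if up < down else -0.1     # move wherever the detector does worse

print(round(mu, 1))             # mu has drifted close to REAL_MEAN
print(detector_accuracy(mu))    # accuracy has collapsed toward a coin flip
```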

Why do these "AI detectors" keep getting advertised if they don't work?

  1. People are afraid of being saturated by fake content, and the media is taking advantage of that fear to sell snake oil
  2. Every generator network comes with its own free detector network that doesn't really work all that well (~50% accuracy), because it was used to train the generator originally, so these detectors are ubiquitous among AI labs. That means the people who own the detectors are the SAME PEOPLE who created the problem in the first place, and they want to make sure you come back to them for the solution as well.