hydroptic

joined 1 year ago
[–] [email protected] 12 points 4 months ago

It's almost like the rules don't apply to the moneyed class

[–] [email protected] 36 points 4 months ago* (last edited 4 months ago) (4 children)

Considering that C-suite executives are usually fantastically expensive, theirs would be a logical position to automate (assuming AI worked the way suits think it does). For some veeeery strange reason, no board of directors has suggested replacing themselves with AIs

[–] [email protected] 11 points 4 months ago (3 children)

Wait, they're really doing that?

[–] [email protected] 4 points 4 months ago

> This shit is so bad that even a blind guy can see it.

You severely underestimate the shortsightedness of the executive class. They're usually so convinced of their infallibility that they absolutely will make decisions that are obviously terrible to anyone looking in from the outside

[–] [email protected] 7 points 4 months ago (2 children)

I doubt there's any sort of 4D chess going on; more likely the whole thing was brought about by short-sighted executives who feel they have to do something to show they're still in the game, precisely because they're so far behind "Open"AI

[–] [email protected] 1 points 4 months ago

This, combined with the meteoric rise of fascism, absolutely leaves me thinking that I'll probably end up in a concentration camp

[–] [email protected] 161 points 4 months ago (22 children)

And this technology is what our executive overlords want to replace human workers with, just so they can raise their own compensation and pay the remaining workers even less

[–] [email protected] 109 points 4 months ago (8 children)
[–] [email protected] 31 points 4 months ago (1 children)

> The vast majority of AI Overviews provide high quality information

According to some fuckwitted Google rep, and I wouldn't trust them any further than I could throw them.

[–] [email protected] 2 points 4 months ago

how to build Teller-Ulam design in shed

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago)

Well, yes and no. With a straight-up hash set, you're keeping set_size * bits_per_element bits in memory, plus whatever the overhead of the hash table is, which might not be tenable for very large sets. With a Bloom filter that has e.g. a ~1% false positive rate and an ideal k parameter (the number of hash functions, see e.g. the Bloom filter wiki article), you're only keeping ~10 bits per element completely regardless of element size, because Bloom filters don't store the elements themselves or even their full hashes – they only tell you whether some element is probably in the set or not, and you can't e.g. enumerate the elements in the set. As an example of memory usage, a Bloom filter with a ~1% false positive rate for 500 million elements would need 571 MiB (noting that although the size of the filter doesn't grow when you insert elements, the false positive rate goes up once you go past that 500 million element count).
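For anyone who wants to double-check those numbers, here's a quick back-of-the-envelope calculation using the standard sizing formulas from the wiki article (Python, purely for illustration):

```python
import math

# Standard Bloom filter sizing formulas:
#   m = -n * ln(p) / (ln 2)^2   -> total number of bits
#   k = (m / n) * ln(2)         -> optimal number of hash functions
n = 500_000_000  # expected number of elements
p = 0.01         # target false positive rate

m = -n * math.log(p) / (math.log(2) ** 2)
k = (m / n) * math.log(2)

print(f"bits per element: {m / n:.1f}")          # ~9.6 bits, regardless of element size
print(f"total size: {m / 8 / 2 ** 20:.0f} MiB")  # ~571 MiB
print(f"optimal k: {k:.1f}")                     # ~6.6, so 7 hash functions in practice
```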

Lookup and insertion time complexity for a Bloom filter is O(k), where k is the parameter I mentioned – and since k is a constant, that's effectively O(1).
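If it helps, here's a minimal sketch of what such a filter can look like in Python. The double-hashing trick for deriving the k bit positions is just one common choice, not the only way to do it:

```python
import hashlib
import math


class BloomFilter:
    """Minimal Bloom filter sketch: keeps only a bit array, never the elements."""

    def __init__(self, expected_elements: int, false_positive_rate: float):
        n, p = expected_elements, false_positive_rate
        # Same sizing formulas as in the calculation above.
        self.m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))  # number of bits
        self.k = max(1, round((self.m / n) * math.log(2)))         # number of hash functions
        self.bits = bytearray((self.m + 7) // 8)

    def _indexes(self, item: bytes):
        # Derive k bit positions from two 64-bit halves of one hash
        # (Kirsch-Mitzenmacher double hashing).
        digest = hashlib.sha256(item).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: bytes) -> None:
        for idx in self._indexes(item):  # O(k) work per insertion
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item: bytes) -> bool:
        # False means definitely absent; True means "probably present".
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))
```

So `bf = BloomFilter(500_000_000, 0.01)` allocates roughly that 571 MiB bit array up front, and `bf.add(...)` / `x in bf` each do just k hash-and-bit operations.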

Probabilistic set membership queries are mainly useful when you're dealing with ginormous sets of elements that you can't shove into a regular in-memory hash set. A good example in the wiki article is CDN cache filtering:

> Nearly three-quarters of the URLs accessed from a typical web cache are "one-hit-wonders" that are accessed by users only once and never again. It is clearly wasteful of disk resources to store one-hit-wonders in a web cache, since they will never be accessed again. To prevent caching one-hit-wonders, a Bloom filter is used to keep track of all URLs that are accessed by users. A web object is cached only when it has been accessed at least once before, i.e., the object is cached on its second request.
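In code, that policy comes down to a couple of lines. A rough sketch reusing the BloomFilter from the previous snippet – `cache_store` here is just a hypothetical stand-in for whatever the actual cache backend is:

```python
def maybe_cache(url: str, response: bytes, seen: BloomFilter, cache_store: dict) -> None:
    """Cache an object only on its second request, per the quoted scheme."""
    key = url.encode()
    if key in seen:
        # Probably requested before (modulo the small false positive rate),
        # so it's no longer a one-hit-wonder: cache it.
        cache_store[url] = response
    else:
        # First sighting: just remember the URL, don't spend cache space yet.
        seen.add(key)
```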
