Generative AI is not going back into the bag.
It probably will, though, once model collapse sets in.
That's the irony, really... the more successful it is, the sooner it'll poison itself to death.
Because it's metal as fuck, that's why. Electromagnetic equivalent of a sonic boom.
Also, it's pretty:
Impressive how...? It's just statistics-based, very slightly fancier autocomplete...
And useful...? It's utterly useless for anything that requires the text it generates to be reliable and trustworthy... the most it can be used for with any reliability is as a somewhat more accurate autocomplete (though with a higher chance of its mistakes going unnoticed) and possibly, if trained on a custom dataset, as a non-quest-essential dialogue generator for NPCs in games... in any other use case it'll inevitably cause more harm than good... and even in those two cases the added costs aren't remotely worth the slight benefits.
It's just a fancy extremely expensive toy with no real practical uses worth its cost.
The only people it's useful to are snake oil salesmen and similar scammers (and even then only in the short run, until model collapse makes it even more useless).
All it will have achieved in the end is an increase in enshittification, global warming, and distrust in any future real AI research.
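For what it's worth, the "fancy autocomplete" framing is easy to see in miniature with a bigram model: raw word-frequency statistics predicting the most likely next word. This is only a toy sketch (the corpus and function names here are made up for illustration; real LLMs use vastly more context and parameters), but the principle of predicting the next token from statistics is the same:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count how often each word follows each other word (a bigram model:
# the statistics behind the simplest possible autocomplete).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word):
    # Predict the most frequent next word; return None for unseen words.
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(autocomplete("the"))  # "cat" follows "the" most often in this corpus
```

An LLM replaces the frequency table with billions of learned parameters and a much longer context window, but it's still picking likely continuations, not checking them for truth.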
I think this is still widely misunderstood at a fundamental level.
The fact that it's being sold as artificial intelligence instead of autocomplete doesn't help.
Or Google and Microsoft trying to sell it as a replacement for search engines.
It's malicious misinformation all the way down.
LLMs can't be incrementally updated (i.e., learn); they have to be retrained from scratch... and that can't be done anymore, because every source of new information is polluted with enough AI output to cause model collapse.
So they're stuck with outdated information, or, if they are retrained, they get dumber and crazier with each iteration due to the amount of LLM-generated crap in the training data.
So, they've basically accidentally (or intentionally) made Eliza with extra steps (and many orders of magnitude more energy consumption).
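The feedback loop above can be sketched with a toy statistical model: fit a Gaussian to some data, generate synthetic data from the fit, refit on the synthetic data, and repeat. This is only an analogy (the function name and parameters are made up for illustration, and real LLM training is far more complex), but it shows what happens when each generation trains only on the previous generation's output:

```python
import random
import statistics

def fit_and_resample(data, n):
    # "Train" a model on the data (fit mean and std of a Gaussian),
    # then emit n synthetic samples from that model.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
n = 20  # small sample per generation, so estimation error compounds quickly
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "real" data
initial_std = statistics.pstdev(data)

for generation in range(1000):
    data = fit_and_resample(data, n)  # train only on the last model's output

final_std = statistics.pstdev(data)
print(f"std of real data:      {initial_std:.3f}")
print(f"std after 1000 rounds: {final_std:.3e}")
```

Each refit loses a little of the original distribution's spread, so after enough generations the "model" emits nearly identical samples: the statistical analogue of degenerate, repetitive output.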
Cats are obligate carnivores with an excellent sense of smell, evolved to eat freshly hunted meat and little else, who'll have to be very hungry before they eat anything remotely past due date.
We're omnivores who'll eat pretty much anything including stuff that'd kill most other animals that'd try to eat it (seriously, look up the long lists of “normal” foods you can't feed your pets because they'd kill them); we call deadly toxins that plants have evolved over hundreds of millions of years to be as inedible as possible “spices” and “drugs”, and consume them for fun. We'll let perfectly good food rot and ferment for months before we eat it because it somehow makes it better for our tastes.
No, we're most definitely not the picky eaters here, not even when compared to dogs, much less when compared to cats.
As for the ocean, everything in it comes with concentrations of mercury and other heavy metals and industrial waste that are harmful even to us, extremely high concentrations of microplastics, and a vast variety of parasites that require anything we get from the ocean to be flash-frozen before it can be considered safe to eat (even if we ignore the heavy metals and plastics and other shit).
Plus, of course, every bit of crap ever produced on the planet ends up there... if homeopathy was real ocean water would be a fucking universal panacea, the amount of shit it's got dissolved in it.
I've always assumed most of the “food” we get from the big liquid dumpster we call sea wouldn't be sellable (to humans or other animals) if anything remotely resembling quality control applied to it... if anything, I'd assume the least worst bits go to the cats, since they're much pickier eaters than us, and have less tolerance for toxins...
some cat food is indistinguishable from canned tuna
This might be saying more about canned tuna than about cat food... (and I love canned tuna).
This article seems to put the blame on the shockwave from Starship's rapid unscheduled disassembly in the upper atmosphere (not its launch), but there have also been recent warnings about the effects on the ionosphere of metal particulates from such explosions, satellites burning up in the atmosphere, and similar pollution.
All in all, burning or blowing up metallic crap in the upper atmosphere seems to be quite a bad idea.
These effects may be troublesome, but they are short-lived; re-ionization occurs as soon as the sun comes up again.
The problem is when you've got enough short-lived microsatellites, Starlink-like constellations, and whatnot that you've practically got a whole Kessler syndrome of the damn things constantly burning up in whatever's left of the ionosphere...
The good thing about that is that it kills the LLMs: new models can only be trained on this LLM-generated gibberish, which makes the gibberish they generate even more garbled and useless, and so on, until every model you try to train can only produce random, unintelligible garbage.