History would at least indicate that it's an ornithologist thing.
Here's the thing. You said a "dove is a pigeon."
Is it in the same family? Yes. No one's arguing that.
As someone who is a scientist who studies pigeons, I am telling you, specifically, in science, no one calls pigeons doves. If you want to be "specific" like you said, then you shouldn't either. They're not the same thing.
If you're saying "pigeon family" you're referring to the taxonomic grouping of Columbidae, which includes everything from pigeons to doves.
So your reasoning for calling a pigeon a dove is because random people "call the white ones doves?" Let's get gulls and pelicans in there, then, too.
Also, calling someone a human or an ape? It's not one or the other, that's not how taxonomy works. They're both. A dove is a dove and a member of the pigeon family. But that's not what you said. You said a dove is a pigeon, which is not true unless you're okay with calling all members of the pigeon family pigeons, which means you'd call doves, pigeons, and other birds pigeons, too. Which you said you don't.
It's okay to just admit you're wrong, you know?
Finally a feel-good story makes the news
IMO it's even worse than that, at least from what I gather from the AI/Singularity communities I follow. For them, AGI is the end goal - a creatively thinking AI capable of deduction far beyond human ability. The company that owns that suddenly has the capability to solve all manner of problems that are slowing down technological advancement. Obviously, owning it would be worth trillions.
However, it's really hard to see through the smoke that the Altmans etc. are putting up - how much of it is genuine prediction and how much is a fairy tale they're telling to get more investment?
And I'd have a hard time believing it isn't mostly the latter. While LLMs have made some pretty impressive advancements, they still can't have a specialized discussion about pretty much anything without hallucinating answers. I have a test I run on each new generation of LLMs where I interview them about a book I'm relatively familiar with, and even the newest ChatGPT model still makes up a ton of shit, often contradicting its own answers within the same thread, all while remaining absolutely confident that it's familiar with the source material.
Honestly, I'll believe they're capable of advancing AI when we get an AI that can say 'I actually am not sure about that, let me do a search...' or something like that.
This guy capitalisms ^
For those who haven't had their coffee yet this morning - 'middle class' is yet another term they use to divide us and make us fight with each other instead of the real enemy
Why does that work?
Saved it but never thought it’d come in handy
The worst to me is everyone now including a shitty bag to put the product in. It MAYBE makes sense to include a case for travel headphones or something, but no, I do not need you to include something to put the external SSD in.
No, that's very true. I had it look up leather repair shops not too long ago and it listed six completely fictional shops, with fully fleshed-out TripAdvisor-style blurbs for each one. It was hilariously convincing and a complete waste of my time. But it does seem like that happens less and less lately.
I'm extremely wary and nervous about how disruptive LLMs can/will be, but one relief is just getting an answer directly for things instead of wading through page after page of SEO-optimized BS. It's just really nice when you can get a quick answer and get back to the things you want to be doing.
I suppose the AI overlords will screw that up somehow too, but IMO we're in a brief window of usefulness.
There are a lot of theories, even from the creators: https://mashable.com/article/how-long-phil-connors-stuck-in-groundhog-day-time-loop