AI doesn't grok anything. It doesn't have any capability of understanding at all. It's a Markov chain on steroids.
…is how generative-AI haters redefine terms and move the goalposts to fight their cognitive dissonance.
Imagine believing that AI-haters are the ones who redefine terms and move goalposts to fight their cognitive dissonance.
Did you read the paper? Or at least have an LLM explain it?
I read the abstract, and the connection to your title is a mystery. Are you using "grock" as in "transcendental understanding" or as Musk's branded AI?
No c, just grok, originally from Stranger in a Strange Land. But a more technical definition is provided and expanded upon in the paper. Mystery easily dispelled!
In that case I refer you to u/catloaf's post. A machine cannot grock, not at any speed.
Thanks for clarifying, now please refer to the poster's original statement:
AI doesn't grok anything. It doesn't have any capability of understanding at all. It's a Markov chain on steroids.
We follow the classic experimental paradigm reported in Power et al. (2022) for analyzing “grokking”, a poorly understood phenomenon in which validation accuracy dramatically improves long after the train loss saturates. Unlike the previous templates, this one is more amenable to open-ended empirical analysis (e.g., under what conditions grokking occurs) rather than just trying to improve performance metrics.
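For anyone who hasn't read the paper, the setup behind that quote is easy to sketch. Power et al. (2022) train a small model on a fraction of all modular arithmetic equations and hold out the rest; "grokking" is the delayed jump in held-out accuracy long after training accuracy saturates. Here is a minimal sketch of just the dataset construction; the prime `p` and the split fraction are illustrative assumptions, not values taken from the quoted text:

```python
# Sketch of the modular-addition dataset used in grokking experiments
# (Power et al., 2022). Values of p and train_fraction are illustrative
# assumptions, not the paper's exact hyperparameters.
import itertools
import random

def modular_addition_dataset(p=97, train_fraction=0.5, seed=0):
    """Enumerate every equation a + b = c (mod p), then split into
    train/validation sets. Each item is ((a, b), c)."""
    equations = [((a, b), (a + b) % p)
                 for a, b in itertools.product(range(p), repeat=2)]
    rng = random.Random(seed)
    rng.shuffle(equations)
    cut = int(len(equations) * train_fraction)
    return equations[:cut], equations[cut:]

train, val = modular_addition_dataset()
# The full table has p*p equations; half are held out, so a model can
# only score well on `val` by generalizing the rule, not by memorizing
# the training rows. The late jump in val accuracy is what the paper
# narrowly defines as "grokking".
```

The point of the design is that memorization and generalization come apart cleanly: the held-out equations are never seen, so validation accuracy directly measures whether the model has internalized the underlying modular rule.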
Oh okay so they're just redefining words that are already well-defined so they can make fancy claims.
Well-defined for casual use is very different from well-defined for scholarly research. It's standard practice to take colloquial vocab and more narrowly define it for use within a scientific discipline. Sometimes different disciplines will narrowly define the same word in two different ways, which makes interdisciplinary communication pretty funny.
It’s standard practice to take colloquial vocab and more narrowly define it for use within a scientific discipline.
No. It's not standard at all, especially when the goal is overtly misleading.
Sometimes different disciplines will narrowly define the same word two different ways
Maybe one or both disciplines are promoting bullshit.
Did you have a question?
Thanks for posting, please ignore the stochastic luddites 🙂
I appreciate it. I've had little luck engaging people in conversation about AI research in general. Since abandoning reddit, I've been litmus testing other platforms. I'm afraid this puts lemmy at about a 12. /shrug. Still better than reddit.
I couldn't think of a worse platform to try and discuss this topic than Lemmy. The consensus here is essentially that big companies = bad, AI companies = big, and thus AI = bad.
Know of any good platforms for this, by chance?
No, but probably the dedicated subreddit
You'd think, but not really. /r/chatgpt and a couple AI art subs are the only active ones. And "active" means bots and parrots in this case.
Well, I'm on Lemmy myself, so perhaps that's some sort of indication of where I prefer to discuss things with people in general, not just about AI. My list of blocked users is rather vast, though, so a big part of the loudest haters are filtered out of my feed. That surely contributes to the better experience here - or at least a less bad one.
Definitely preferred it over there on reddit, but I'm a man of principle, so I'm not going back either.