snek_boi

joined 3 years ago
[–] [email protected] 1 points 3 weeks ago* (last edited 3 weeks ago)

It sounds like you really care about fairness, in the sense of giving credit to the hard work behind learning. Do you know the phrase “dead metaphor”?

[–] [email protected] 22 points 3 weeks ago

Came here to say this. I would like to know the definition (and the theory behind it) to have a conversation about it, but I won’t watch a three-hour video to get the answer (or not!).

[–] [email protected] 3 points 1 month ago

Totally. The history of intelligence has sadly also been the story of eugenics. Fortunately, there have been process-based theories and contextual theories that have defined intelligence in more humane and useful ways. In this view, IQ tests do not measure an underlying characteristic, but a set of mental skills. Seen this way, intelligence becomes something people can gain with nurturance. If you’re interested, check out Relational Frame Theory.

[–] [email protected] 1 points 1 month ago (1 children)

Ah, I see how my wording was confusing. I mean planning in the sense of “How will we complete the work that we already committed to?” and “What will we do today to achieve our Sprint goal?”

I arrived at the word planning because Scrum is sometimes described as a planning-planning-feedback-feedback cycle. You plan the Sprint, you plan daily (Daily Scrums), you get feedback on your work (Sprint Review), and you get feedback on your process (Sprint Retrospective).

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (4 children)

lol I hope your standups are not actually like this! The purpose is for the team to plan, together, what it will do today to achieve the Sprint goal.

[–] [email protected] 6 points 1 month ago* (last edited 1 month ago) (1 children)

Professionals have large networks of neurons that are sturdy and efficient from repeated use. Memory palaces help start the construction of those networks. Afterwards, as another commenter noted, the knowledge is deeply processed and mnemonics are replaced by networks of meaning. It is no longer “This algorithm rhymes with tomato”, but “This algorithm is faster if the data is stored in faster hardware, but our equipment is old, so we’d better use this other algorithm for now”.

Broadly, the progression of learning is: superficial learning, deep learning, and transfer. Check out Visible Learning: The Sequel by John Hattie for more on this.

Edit: To directly answer your question, experts have so many sturdy neural hooks on which to hang new knowledge that mnemonics become less and less necessary. Mnemonics may be particularly helpful when first learning something challenging, but they fade in importance as expertise grows.

You could also look into the expert paradox. We used to think of memory as boxes that get filled. That idea was directly challenged by Craik and Lockhart’s Levels of Processing, which supports the idea that “the more you know, the faster you learn”. Note that this is domain-specific: an expert in dog training won’t learn quantum mechanics faster than anyone else.

[–] [email protected] 3 points 1 month ago

I’d say feeling admiration for others. People who are kind, patient, insightful, and critical thinkers. People who look at how political goods (including wealth) are distributed and can think critically about it. Nutomic and Dessalines for sure.

[–] [email protected] 12 points 2 months ago

I see your concern for truth in any scenario, and I agree validity should be a constant consideration! However, bias and astroturfing are different. Bias is the lens we use to look at reality; astroturfing forces lenses onto many others without their knowledge. It is a deliberate campaign.

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago)

This is accurate for neoclassical economics. However, I wonder how the comic would change with a nuanced understanding of how neoclassical economics differs from classical economics.

[–] [email protected] 17 points 3 months ago (3 children)

Ultimately, yeah. The article points out that the way they want to do it is with unique designs, carbon neutrality, and transparency in the production chain.

[–] [email protected] 11 points 3 months ago* (last edited 3 months ago)

I agree that we shouldn't jump immediately to AI-enhancing it all. However, this survey is riddled with problems, from selection bias to external validity. Heck, even internal validity is a problem here! How does the survey account for social desirability bias, the sunk cost fallacy, and anchoring bias? I'm sorry if this sounds brutal or unfair, but I just hope to see fewer validity threats. I think I'd be less frustrated if the title were something like "TechPowerUp survey shows 84% of 22,000 respondents don't want AI-enhanced hardware".

[–] [email protected] 81 points 3 months ago (8 children)

I MISSED THE EQUIVALENT OF PLACE IN LEMMY? Does anyone have context?

 
 
 

Hey.

My brother will buy his very first laptop soon. He was saving for a MacBook Pro, but after hearing me go on about Apple being PRISM-compliant and about how awesome open source software is, he's open to new options.

His main argument for buying an M1 is that there is currently no chip nearly as good (in terms of energy efficiency). And I see that he has a point there.

However, I was also kinda hoping he'd use his savings for a Framework laptop running Linux. Regarding those computers, my biggest hope is that they'll eventually run good RISC-V chips, chips that could easily be swapped in as a module. But that may be a long way off, maybe decades.

Another option I thought about was him buying the M1 and fighting his way to installing a Linux distro that supports all the M1 MacBook hardware. He'd have a really fast and efficient chip, as well as a good system!

But the main objection to this is that the M1 is not really future-proof... like, it is guaranteed that within the next two years the much better M2 will be put into the MacBook Pro. That improvement isn't trivial; it'll be a 20% reduction in transistor size. But beyond the fast pace of upgrades, it's possible that the novelty of the M1 itself is problematic. For example, I was reading about a vulnerability in the M1 that comes from it not having adopted a particular instruction set in the chip's most basic operations. It's almost as if the M1 is an early-adoption technology, if that makes sense.

Anyway, those are the considerations I have about my brother's computer... hopefully we'll have more clarity by the time his classes begin. Do you have anything that could help us achieve that clarity? Or even muddy the waters a bit more in an interesting way 🙃?

Edit:

Thanks for all the comments! They spurred lots of discussion and some changes of heart!

So, I was really looking forward to getting a Linux-first machine, but two things happened.

One was that there were few options (probably due to the chip shortage?): the System76 Pangolin wasn't available, TUXEDO was quite expensive (and only offered integrated or Nvidia graphics), the Slimbook Titan was quite expensive, and the Slimbook X15 came without dedicated graphics (or with Nvidia, I forget which).

The other thing that happened was a friend getting us to consider a pure-AMD machine. Since AMD has open source drivers (unlike Nvidia), it will probably work with Linux without much hassle. He'd also keep the option of dual-booting Windows to run non-Linux software (in case he needs that for school). Such computers could be the ones with 'AMD Advantage' (AMD CPU and GPU), though they're a bit pricey. Yet it's his money and he's very excited about gaming on them!

This is the most likely route. So, no longer Apple. I would've liked to support Linux-first machines, but I guess AMD was the winner here?
