A base plate that’s got a spring under it, except for a little nub that pokes the power button.
Terrible if you live in earthquake-prone areas.
Wait. Are we describing a bump stock for your computer?
I used to play 1v1 Ticket to Ride matches against my wife using the app.
As background: I’m not a very competitive gamer, but I’m decent at problem solving. When I first learned TtR, I played with fairly … great players. One of my friends was (is?) nationally ranked. They routinely beat the ever-loving crap out of me. I think of the dozens of games we’ve played, I have won maybe 10-20% of the time?
My wife isn’t bad at TtR, but she doesn’t see things the same way in terms of strategy.
We had this one game where I drew a bunch of short routes all over the map, which blocked her early in the game, and a series of lucky route draws led me to connect them, inadvertently blocking her at least twice, including on the last play, where I was just dumping cars to end the game.
She was always a little upset when I beat her, but this time the discrepancy was so bad, and she was so upset, that I just stopped playing Ticket to Ride - like, at all.
That’s when my mom became old enough to vote, and she was a real idiot.
It’s not, you know, the best theory, but it’s the best theory I can come up with while also doing no research.
You say “Not even close” in response to the suggestion that Apple’s research can be used to improve benchmarks for AI performance, but then later say the article talks about how we might need different approaches to achieve reasoning.
Now, mind you - achieving reasoning can only happen if the model is accurate and works well. And to have a good model, you must have good benchmarks.
Not to belabor the point, but here’s what the article and study say:
The article talks at length about the reliance on a standardized set of questions - GSM8K - and how the questions themselves may have made their way into the training data. It notes that modifying the questions dynamically leads to decreases in the performance of the tested models, even if the complexity of the problem to be solved has not gone up.
The third sentence of the paper (Abstract section) says this: “While the performance of LLMs on GSM8K has significantly improved in recent years, it remains unclear whether their mathematical reasoning capabilities have genuinely advanced, raising questions about the reliability of the reported metrics.” The rest of the abstract goes on to discuss (paraphrased in layman’s terms) that LLMs are ‘studying for the test’ and not genuinely achieving real reasoning capabilities.
By presenting their methodology - dynamically changing the evaluation criteria to reduce data pollution and require that models be capable of eliminating red herrings - the Apple researchers are offering a possible way benchmarking can be improved.
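For a concrete sense of what “modifying the questions dynamically” can look like, here’s a minimal sketch in Python. It assumes a toy apple-counting template; the names, numbers, and the `make_variant` helper are purely illustrative, not the researchers’ actual code:

```python
import random

# Hypothetical illustration of a dynamic benchmark variant: keep the
# underlying arithmetic fixed, but swap names/numbers and optionally add an
# irrelevant clause (a "red herring") so a model can't just rely on having
# memorized the exact wording of a fixed question set.

NAMES = ["Ava", "Liam", "Noah", "Mia"]

def make_variant(add_red_herring: bool = False) -> tuple[str, int]:
    name = random.choice(NAMES)
    apples = random.randint(3, 9)
    bought = random.randint(2, 8)
    eaten = random.randint(1, 3)

    question = (
        f"{name} has {apples} apples, buys {bought} more, "
        f"and eats {eaten}. "
    )
    if add_red_herring:
        # Irrelevant detail that should not change the answer.
        question += f"{name}'s basket can hold up to 40 apples. "
    question += f"How many apples does {name} have now?"

    # Ground truth is computed from the template, not looked up anywhere.
    answer = apples + bought - eaten
    return question, answer

if __name__ == "__main__":
    q, a = make_variant(add_red_herring=True)
    print(q)
    print("Expected answer:", a)
```

Run it a few times and the surface wording changes while the arithmetic stays the same, which is the point: a model that merely memorized the fixed question set gets no help here, and the red-herring clause tests whether it can ignore information that doesn’t matter.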
Which is what the person you replied to stated.
The commenter is fairly close, it seems.
That’s very fair, indeed.
Perhaps awareness of one will spark awareness of the other. I suppose my concern is that plasticisers are sort of a ‘hidden’ risk, for the most part. They’re used in nearly all food packaging (and prep equipment, such as hoses) for anything that isn’t contained in glass or served up in its own peel.
Microplastics are terrifying and all that, but I’m sort of more worried about plasticisers like BPA, BPF, BPS and the rest of the alphabet of BP-whatevers that were created and brought into use after the dangers of BPA were realized.
Just a heads up - if something plastic says it’s BPA-free, it probably uses a different bisphenol compound that is less studied than BPA. And is likely as toxic (or even more toxic)!
But nobody ever talks about those, because science words.
Don’t be sorry to say that! I think the idea is pretty darn cute. When everyone tells you how amazingly stylish, practical, and clever you are, remember me!
(But take all the credit for the idea for yourself - unless some poor fashionless soul doesn’t like it, then definitely blame me for a bad suggestion.)
My wife has one of the neck strap ones, and she doesn’t like wearing it for the same reason. My brain just assumed they took one of the mounting plates from one of those and hooked it to one of those sproingy straps.
Remotes are tough. We have a dedicated holder that is just where each remote goes as soon as it is no longer touching a hand, because they otherwise do get lost. Despite that, I’ve even considered 3D printing an AirTag holder that I can glue to the remotes, although that would just mean pointing my phone at the couch while it tells me they’re somewhere ‘in there.’
I became disappointed when I zoomed in to realize she had a wallet chain and not a sproingy yellow coiled lanyard thing that was somehow attached to her phone. (Sorry, Amazon link: One of these)
I don’t know why. I guess I just thought the idea was kind of cute and fun. This dad-fucking, bacon-grease-swilling subway texter uses a cute little bouncy cord thing to keep her phone handy, amidst an otherwise austere getup - just a zany detail to contrast with the rest. Alas. Just a boring ass wallet chain.
The problem is that now the first page of results is all AI garbage and wrong, so you’re not 100% sure at what point you’ve reached the sane internet.
Well. That’s it. Get the flamethrowers. Time to burn down the Amazon.
No. Not the one that’s already burning. The other one.
You’re one of the founding members of the greater Seattle area polycule, aren’t you?