I'd be interested in setting up the highest-quality models to run locally, but I don't have the budget for a GPU with anywhere near enough VRAM. My main server PC has a 7900X, though, and I could afford to upgrade its RAM - is it possible, and if so how difficult, to get this stuff running on CPU? Inference speed isn't a sticking point as long as it's not unusably slow, but I do have access to an OpenAI subscription, so there just wouldn't be much point in running lower-quality models except as a toy.
Well they said .NET Framework, and I also wouldn't be surprised if they more or less wrapped that up - .NET Framework specifically means the old implementation of the CLR, and it's been pretty much superseded by an implementation just called .NET, formerly known as .NET Core (definitely not confusing at all, thanks Microsoft). .NET Framework was only written for Windows, hence the need for Mono/Xamarin on other platforms. In contrast, .NET is cross-platform by default.
I was very intrigued by a follow-up to the recent Numberphile video about divergent series. It was a return to the idea that the sum of the positive integers can be assigned the value -1/12. There were some places this could be used, but as far as I know a lot of experts viewed it as shaky math.
As far as I recall the story goes something like this: using a technique from Terence Tao, a team was seemingly able to "fix" some of the infinities that come up in quantum field theory - there's a certain way to make at least some divergent series work out to a finite value, and the presenter proposed that this can be explained as the universe "protecting us" from the infinities inherent in the math.
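If I'm remembering the math right, the trick is what Tao calls "smoothed sums" (this is my own reconstruction from his writing, not something spelled out in the video), plus the older zeta-function route:

    % Zeta regularization: the series only converges for Re(s) > 1;
    % the value at s = -1 comes from analytic continuation.
    \zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\Re(s) > 1),
    \qquad \zeta(-1) = -\tfrac{1}{12}

    % Smoothed sums: for a smooth cutoff \eta with \eta(0) = 1 and compact support,
    \sum_{n=1}^{\infty} n\,\eta\!\left(\tfrac{n}{N}\right)
        = -\frac{1}{12} + C_\eta N^2 + O\!\left(\tfrac{1}{N}\right),
    \qquad C_\eta = \int_0^{\infty} x\,\eta(x)\,dx

The divergence gets isolated in the cutoff-dependent C_eta N^2 term, and the universal constant left over is the -1/12. That's the sense in which the "sum" of the positive integers can be assigned a finite value without just waving hands.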
It made me think about other places infinities show up in modern physics (namely, singularities in general relativity) and whether a technique like this could "solve" them without needing a whole new framework like string theory.
Even as an (older) zoomer in the US, I never saw this. No one cared what phone you used. If you had an Android you wouldn't be in iMessage group chats, but no one judged you for it.
Besides rendering bugs that may or may not be Safari's fault, I wanted to get uBlock Origin on an iPhone but it's not available - IIRC because Safari's content-blocking API is purely declarative and far more restrictive than what uBlock Origin is designed around.
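For anyone wondering why it doesn't fit: Safari extensions hand the browser a static list of declarative rules (a trigger/action pair per rule) that get compiled ahead of time, so there's no hook for per-request filtering logic or the dynamic/cosmetic filtering uBlock Origin is built around. A rough sketch of the rule shape, from memory, so treat the exact keys as approximate:

    import json

    # Roughly the shape of a Safari content-blocker rule list (from memory -
    # check Apple's docs for the exact keys). Every rule is a static
    # trigger/action pair; the extension never gets to run code per request.
    rules = [
        {
            "trigger": {"url-filter": ".*doubleclick\\.net.*"},
            "action": {"type": "block"},
        },
        {
            "trigger": {"url-filter": ".*"},
            "action": {"type": "css-display-none", "selector": ".ad-banner"},
        },
    ]

    print(json.dumps(rules, indent=2))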
It's very well documented that machine learning will have the same biases as its training set. Years ago this was a big deal when Amazon tried to use ML for hiring and the model kept penalizing women, because it had learned from the company's own historical decisions.
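As a toy illustration of the mechanism (entirely synthetic data, just to show how the bias gets in): if the labels are past human decisions, the model happily learns whatever disparity is baked into them.

    # Toy illustration: a model trained on biased historical decisions
    # reproduces the bias. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)    # 0 = group A, 1 = group B
    skill = rng.normal(0.0, 1.0, n)  # identical skill distribution in both groups

    # "Historical" decisions: same skill, but group B was held to a higher bar.
    hired = (skill > np.where(group == 0, 0.0, 1.0)).astype(int)

    X = np.column_stack([skill, group])
    pred = LogisticRegression().fit(X, hired).predict(X)

    for g in (0, 1):
        print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
              f"model hire rate {pred[group == g].mean():.2f}")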
The issue is not just that a bad update went out. Freak accidents can happen. Software is complicated and you can never be 100% sure. The problem is the specifics. A fat finger should never be able to push a bad update to a system in customers' hands, let alone a system easily capable of killing people in a multitude of ways. I'm not quite as critical as the above commenter, but this is a serious issue that should raise major questions about their culture and procedures.
This isn't just some website where a fat finger at worst means the site is down for a while (assuming you do the bare minimum and back up your db). This is a vehicle. That's what they meant about the CAN bus - not that that's really a concern when the infotainment system just gets bricked, but that they have such lax procedures around software that touches a safety-critical system.
Having systems in place to ensure only tested, known-good builds ever get pushed (even a gate as simple as the sketch below) is pretty damn basic safety practice. Swiss cheese model. If they can't even handle the basics, what other bad practices do they have?
Again, not that I think this is necessarily as bad as the other person says - perhaps this is the only mistake they've made in their safety procedures and otherwise they're industry leaders; we just don't know yet. But this is extremely concerning, and until proven otherwise it should be investigated and treated as a very serious safety violation. Safety first.
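To make "only tested, known-good builds" concrete - this is purely my own sketch of the idea, not a claim about how their pipeline actually works - the deploy step should refuse to push any artifact whose hash isn't on an approved manifest produced by the test/review process:

    # Sketch of a deploy-time gate: refuse to push any build artifact whose
    # hash isn't on an approved manifest produced by testing/review.
    # (Illustrative only - a real pipeline would also verify a signature on
    # the manifest, do staged rollouts, support rollback, etc.)
    import hashlib
    import json
    import sys

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def gate(artifact: str, manifest: str) -> None:
        with open(manifest) as f:
            approved = set(json.load(f))  # list of approved build digests
        digest = sha256_of(artifact)
        if digest not in approved:
            sys.exit(f"REFUSING TO DEPLOY {artifact}: {digest} is not an approved build")
        print(f"OK to deploy {artifact} ({digest})")

    if __name__ == "__main__":
        gate(sys.argv[1], sys.argv[2])

Nothing fancy, but the point is that one person's slip physically can't reach the fleet unless the artifact already went through the process that puts it on that list.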
No, and the above commenter is a little mixed up. While we originally thought the benefit of RISC CPUs was their smaller instruction set - hence the name - it's turned out that the gains really come from a couple of other things common to RISC architectures. In x86 pretty much every instruction can reference memory directly, but in RISC architectures only a few specific instructions (the loads and stores) can touch memory. Modern RISC architectures actually tend to have a lot of instructions, so RISC means something more like "load/store architecture" nowadays.
Another big part of RISC architectures is that they try to make instruction fetch and decode as easy as possible, usually with fixed-length (or at least simply encoded) instructions. x86 instructions are variable-length and a nightmare to decode, which adds a lot of complexity and somewhat limits optimization opportunities, like how wide you can practically make the decoder. There's more to it, like how RISC thinks about the job of the compiler, but in my experience load/store and ease of fetch/decode are the main differentiators for RISC.
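A toy way to see the load/store point - these are made-up mini-ISAs, not real x86 or ARM (on real hardware, think x86's "add eax, [rdi]" versus AArch64 needing an ldr followed by an add):

    # Toy illustration of "load/store architecture" - made-up mini-ISAs,
    # not real encodings. The CISC-style machine lets ADD read memory
    # directly; the RISC-style one only touches memory via LD/ST.
    mem = {0x10: 7}

    def run_cisc(regs):
        regs["r0"] += mem[0x10]      # ADD r0, [0x10] - arithmetic with a memory operand
        return regs

    def run_risc(regs):
        regs["r1"] = mem[0x10]       # LD  r1, [0x10] - only loads/stores touch memory
        regs["r0"] += regs["r1"]     # ADD r0, r0, r1 - register-to-register only
        return regs

    print(run_cisc({"r0": 5}))            # {'r0': 12}
    print(run_risc({"r0": 5, "r1": 0}))   # {'r0': 12, 'r1': 7}

Same work either way; the RISC version just forces the memory access to be its own simple instruction, which is part of what makes the pipeline and decoder easier to build.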
More towards your question: a lot of the difficulty in running x86 programs on ARM (really, running any program on a different architecture than it was compiled for) is that the program will likely depend on very specific behaviors that aren't the same across architectures and can be computationally expensive to emulate - the classic example being x86's strong memory-ordering guarantees, which are costly to reproduce faithfully on ARM's weaker memory model. For some great write-ups about that kind of thing, check out the Dolphin (GameCube/Wii emulator) blog posts.
IIRC Stewart Grand Prix and then Jaguar Racing. Not an F1 guy though, so could definitely be wrong.
If we stop doing business with SpaceX, we immediately demolish most of our capability to reach space, including the ISS until Starliner quits failing. Perhaps instead of trying to treat this as a matter of the free market we should recognize it as what it is - a matter of supreme economic and military importance - and force the Nazi fucker out.