I've lost track, is AI a good thing today or a bad thing?
FaceDeer
For reasons I no longer remember, about 14 years ago I stumbled across The Endless Forest. I poked around on the web page a bit, decided it was interesting but not interesting enough to actually install, and moved on. A short while later I saw something on Reddit that I felt like posting a comment on, and so I created an account and this was the first username that popped to mind.
Quite some time later I got into modding a game called Minetest, and a very common element in its API is the "facedir" of a node - short for facing direction. I had a lot of people assume I'd drawn my name from that, but it was sheer coincidence.
It's currently 2024, so we're still okay. :)
The Eugenics Wars ran from 1992 to 1996, so I think we're probably okay.
They're rolling it out gradually, as is customary for routine updates.
I'm not sure why this is worthy of a headline, frankly. This is how Microsoft typically does these things. I guess it's the "...with AI involved somehow!" bit in the title that makes it interesting? I expect that's going to get old fairly quickly.
Fediverse postings are probably also being used for AI training. Just so you won't be too shocked when that eventually comes out.
Sora's capabilities aren't really relevant to the competition if OpenAI isn't allowing it to be used, though. All it does is let the actual competitors know what's possible if they try, which can make it easier to get investment.
Indeed, the level of obsession some people have with Elon Musk is kind of ridiculous.
Writing code to do math is different from actually doing the math. I can easily write "x = 8982.2 / 98984", but ask me what value x actually has and I'll need to do a lot more work and quite probably get it wrong.
This is why one of the common improvements for LLM execution frameworks these days is to give them access to external tools. Essentially, give them a calculator.
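As a rough illustration, here's a minimal Python sketch of what that tool-dispatch loop can look like. It's not any real framework's API: the model is faked, and the CALC(...) call syntax, fake_model, and run_with_tools are all hypothetical names used just to show the division of labour.

```python
# Minimal sketch, not a real framework: a fake "model" emits a hypothetical
# CALC(...) tool call, and the dispatch loop does the actual arithmetic.
import ast
import operator
import re

# Safe arithmetic evaluator so we never eval() raw model output.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def calc(expression: str) -> float:
    """Evaluate a plain arithmetic expression like '8982.2 / 98984'."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval"))

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM: it 'writes the math' but doesn't do the math."""
    return "The answer is CALC(8982.2 / 98984)."

def run_with_tools(prompt: str) -> str:
    """Replace each CALC(...) call in the model's reply with the computed value."""
    reply = fake_model(prompt)
    return re.sub(r"CALC\((.*?)\)",
                  lambda m: format(calc(m.group(1)), ".6g"),
                  reply)

print(run_with_tools("What is 8982.2 divided by 98984?"))
# prints: The answer is 0.090744.
```

Real frameworks do this with structured function-calling rather than regex matching, but the split is the same: the model writes the expression, the tool does the arithmetic.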
Exactly. Article looked fine to me, if it was AI-written then it did a good job.
It's not exactly training, but Google just recently previewed an LLM with a million-token context that can do effectively the same thing. One of the tests they did was to put a dictionary for a very obscure language (only 200 speakers worldwide) into the context, knowing that nothing about that language was in its original training data, and the LLM was able to translate it fluently.
OpenAI has already said they're not making that publicly available for now.
This just means that OpenAI is voluntarily ceding the field to more ambitious companies.
Negative examples are just as useful to train on as positive ones.