TheHarpyEagle

joined 2 months ago
[–] [email protected] 2 points 1 month ago (1 children)

Yossarian is kind of a whiny bitch, but it's because he's trying to cover up his exhaustion and terror with anything that will keep him out of harm's way. What I liked about it was all of the silly jokes that come back to hit hard in the second half of the book.

[–] [email protected] 1 point 1 month ago (4 children)

Oh hey, I'm reading The Martian right now! Also loved Project Hail Mary by the same author, Andy Weir. It's a bit more fantastical and just a great read.

[–] [email protected] 2 points 1 month ago (1 children)

God, it's like teachers trying to copy a link from their file browser. Well, bless them for trying.

[–] [email protected] 8 points 1 month ago

They seem so directionless lately, and by god is AI the wrong horse to bet on for their users.

I should check out LibreWolf...

[–] [email protected] 54 points 1 month ago* (last edited 1 month ago) (2 children)

OpenOffice was a really solid Microsoft Office rival, and FOSS to boot. Made by Sun Microsystems, of course, and then ruined by Oracle (of course).

Thankfully LibreOffice was forked from it and is still going strong as a very capable suite of document tools. And OpenOffice is basically dead, womp womp.

[–] [email protected] 4 points 1 month ago

Funnily enough, LibreOffice is another great example of this, being forked from OpenOffice (and also way better).

[–] [email protected] 2 points 1 month ago (1 children)

I mean, we've seen already that AI companies are forced to be reactive when people exploit loopholes in their models or some unexpected behavior occurs. Not that they aren't smart people, but these things are very hard to predict, and hard to fix once they go wrong.

Also, what do you mean by synthetic data? If it's made by AI, that's how collapse happens.

The problem with curated data is that you have to, well, curate it, and that's hard to do at scale. We no longer have the luxury of a few decades' worth of unpoisoned data to draw from; the only way to guarantee training data wasn't generated by a model is to make it yourself.
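
A crude version of that guarantee is just provenance filtering, e.g. a cutoff date. Here's a made-up sketch (the record fields and the exact cutoff are my own assumptions, not any real pipeline):

```python
from datetime import date

# Hypothetical records; a real pipeline would carry provenance metadata
# (crawl date, source) alongside each document. Field names are made up.
docs = [
    {"text": "some pre-LLM document", "crawled": date(2019, 5, 1)},
    {"text": "possibly model-written text", "crawled": date(2024, 2, 9)},
]

# Assumption: treat anything crawled after late 2022 (roughly when
# model-generated text started flooding the web) as suspect.
CUTOFF = date(2022, 11, 30)

clean = [d for d in docs if d["crawled"] < CUTOFF]
print(f"kept {len(clean)} of {len(docs)} documents")
```

Of course, a hard cutoff means your corpus stops growing, which is exactly the "hard to do at scale" problem.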

[–] [email protected] 3 points 1 month ago

Wow, it's amazing that just 3.3% of the training set coming from the same model can already start to mess it up.

[–] [email protected] 2 points 1 month ago

I've read some snippets of AI-written books, and it really does feel like my brain is short-circuiting.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

At least in this case, we can be pretty confident that there's no higher function going on. It's true that AI models are a bit of a black box that can't really be examined to understand exactly why they produce the results they do, but they're still just a finite pile of numbers. The black box doesn't "think" any more than a river decides its course, though the eventual state of both is hard to predict or control. In the case of model collapse, we know exactly what's going on: the AI is repeating and amplifying the little mistakes it's made with each new generation. There's no mystery about that part; it's just that we lack the ability to directly tune those mistakes out of the model.
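
If anyone wants to see that amplification mechanically, here's a toy sketch (my own illustration with made-up numbers, not any real training setup; it borrows the ~3.3% synthetic fraction mentioned upthread and pretends the model's only mistake is shrinking the tails a bit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gen-0 corpus: clean samples from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50_000)
synthetic_fraction = 0.033  # the ~3.3% figure from upthread

for generation in range(1, 101):
    # "Training" here is just fitting a mean and spread to the corpus.
    mu, sigma = data.mean(), data.std()

    # The model's one systematic mistake: it slightly under-represents
    # the tails of whatever it learned (the 0.9 factor is made up).
    n_synth = int(len(data) * synthetic_fraction)
    synthetic = rng.normal(mu, 0.9 * sigma, size=n_synth)

    # The next corpus is the previous one with a synthetic slice swapped
    # in, so each generation's errors stay in the data and compound.
    keep = rng.choice(len(data), size=len(data) - n_synth, replace=False)
    data = np.concatenate([data[keep], synthetic])

    if generation % 20 == 0:
        print(f"gen {generation:3d}: corpus std = {data.std():.3f}")
```

The spread drops steadily even though each generation's error is tiny, because the bad samples never get flushed back out of the corpus.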

[–] [email protected] 3 points 1 month ago

I've had very few issues with whitespace in my decade or so of using Python, especially since git and IDEs do a lot to standardize it. I'm a Python simp, tho.
