Sekoia

joined 1 year ago
[–] [email protected] 13 points 10 months ago (1 children)

For the screenshot you might want to use a terminal that doesn't have bloom, a CRT filter, and a background, I genuinely can't see the TUI.

[–] [email protected] 2 points 10 months ago

Lol I didn't get the reference before

(There was a post about Switzerland considering legalizing cocaine cus they have so much and it's so pure & common, apparently)

[–] [email protected] 47 points 11 months ago (12 children)

Uh. Buddy. They absolutely are known for building a shitload of trains. There's the Gotthard Base Tunnel, the longest railway tunnel in the world, and I think also the steepest rack railway in the world?

You've never heard of Swiss trains always being on time?

[–] [email protected] 38 points 11 months ago (5 children)

This is a really solid explanation of why studies finding human-like behavior in LLMs don't mean much; humans project meaning onto them.

[–] [email protected] 14 points 11 months ago (1 children)

Neural networks are named that because they're based on a model of neurons from the 50s, which was then adapted further to work better on computers (so it doesn't resemble the original model much anymore anyway). A more accurate term is multi-layer perceptron (MLP).
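
For the curious, here's roughly what that 50s-era model boils down to, as a quick Python sketch (the weights and inputs are made-up toy values, just for illustration):

```python
import numpy as np

# Rosenblatt-style perceptron from the 50s: a weighted sum of inputs
# pushed through a hard on/off threshold. That's the whole "neuron".
def perceptron(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

# A multi-layer perceptron stacks these, swapping the hard threshold
# for a smooth activation (here ReLU) so the thing can be trained.
def mlp_layer(x, w, b):
    return np.maximum(0, x @ w + b)

x = np.array([1.0, 0.0, 1.0])
print(perceptron(x, w=np.array([0.5, -0.5, 0.5]), b=-0.6))  # prints 1
```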

We now know this model is... effectively completely wrong.

Additionally, the main part (or glue, really) of LLMs is not even an MLP, but a "self-attention" layer. You can't say LLMs work like a brain, because they don't. The rest is debatable but it's important to remember that there are billions of dollars of value in selling the dream of conscious AI.
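
If you want to see how little that has to do with neurons, here's a bare-bones sketch of what a single self-attention head computes (numpy, toy sizes, not tied to any specific model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention: each token's output is a weighted
    mix of every token's value vector, weighted by query-key affinity."""
    q = x @ w_q                                # queries: (seq_len, d)
    k = x @ w_k                                # keys:    (seq_len, d)
    v = x @ w_v                                # values:  (seq_len, d)
    scores = q @ k.T / np.sqrt(k.shape[-1])    # token-to-token affinities
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ v

# Toy example: 4 tokens, 8-dim embeddings (sizes are arbitrary)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```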

[–] [email protected] 14 points 11 months ago (5 children)

Nah. Programming is... really hard to automate, and machine learning even more so. The actual code is pretty straightforward, but to make anything useful you need to gather training data, clean it, and design a model architecture, and that process is far too open-ended for an LLM.

[–] [email protected] 2 points 1 year ago

I'm not aware of any word like that

[–] [email protected] 2 points 1 year ago (2 children)

"The ranges experienced by humans" is extremely variable. My friends from hotter countries can barely handle 10°C, but are fine at 40°C, and it's entirely the opposite for me.

I assure you that for everyday use, Celsius works great. I don't really think either scale is better than the other in practice (outside of chemistry), but "it's the range people experience" is kinda bull. A 10°F difference from 0°F to 10°F feels very different from one between 60°F and 70°F.

Also, water freezing at 0°C (and boiling at 100°C, to a lesser degree) is quite convenient in everyday life. Just check for a minus sign and you know if it can freeze.
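
If you want to sanity-check those ranges yourself, the conversion is trivial (quick sketch; the example temperatures are arbitrary):

```python
def f_to_c(f):
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32) * 5 / 9

# The same 10°F step sits in very different places on the Celsius scale:
print(f_to_c(0), f_to_c(10))   # ≈ -17.8, -12.2 -> deep freeze either way
print(f_to_c(60), f_to_c(70))  # ≈ 15.6, 21.1  -> chilly vs. comfortable

# The Celsius "minus sign" freeze check, done from Fahrenheit:
print(f_to_c(25) < 0)          # True: 25°F is below freezing
```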

[–] [email protected] 1 points 1 year ago

Yeah, but the bridge is correctly over the river and the buildings aren't really merged. Tough one, though.

The second one got me tho

[–] [email protected] 1 points 1 year ago (2 children)

Sure, it's not proof, but it's a good starting point. Non-overfitted images would still show this effect (to a lesser extent), and it would never happen to a human. And it's not like the prompts were the image labels; the model just decided to use the stock image as a template (obvious in the case with the painting).

[–] [email protected] 0 points 1 year ago (4 children)

Personally, I have no issue with models trained on material obtained with explicit consent. Otherwise you're just exploiting people's labor without their consent.

(Also if you're just making random images for yourself, w/e)

((Also also, text models are a separate debate and imo much worse considering they're literally misinformation generators))

Note: if anybody wants to reply with "actually AI models learn like people so it's fine", please don't. No they don't. Bugger off. Here, have a source: https://arxiv.org/pdf/2212.03860.pdf

[–] [email protected] 1 points 1 year ago

To be honest, I don't think it's worth the bother. This is just an i3-5-something, and I got all the working parts off of it. But it's a good idea, thanks!
