Lol I didn't get the reference before
(There was a post about Switzerland considering legalizing cocaine cus they have so much and it's so pure & common, apparently)
Uh. Buddy. They absolutely are known for building a shitload of trains. There's the Gotthard Base Tunnel, which is the longest railway tunnel in the world, and I think they also have the steepest rack railway anywhere?
You've never heard of Swiss trains always being on time?
This is a really solid explanation of how studies finding human behavior in LLMs don't mean much; humans project meaning.
Neural networks are named that way because they're based on a model of neurons from the 50s, which was then adapted further to work better with computers (so it doesn't resemble that model much anymore anyway). A more accurate term is Multi-Layer Perceptron.
We now know this model is... effectively completely wrong.
Additionally, the main part (or glue, really) of LLMs is not even an MLP, but a "self-attention" layer. You can't say LLMs work like a brain, because they don't. The rest is debatable but it's important to remember that there are billions of dollars of value in selling the dream of conscious AI.
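To make "self-attention isn't an MLP" concrete, here's a minimal sketch of a single attention head in plain NumPy. All the names (`d_model`, `W_q`, the toy sizes) are illustrative, not from any particular model; real LLMs add multiple heads, masking, and learned weights, but the core operation is just this: every token mixes in information from every other token, weighted by similarity. There's nothing here that looks like a biological neuron.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8  # toy sizes for illustration

x = rng.standard_normal((seq_len, d_model))   # token embeddings
W_q = rng.standard_normal((d_model, d_model)) # learned in a real model;
W_k = rng.standard_normal((d_model, d_model)) # random here just to run
W_v = rng.standard_normal((d_model, d_model))

q, k, v = x @ W_q, x @ W_k, x @ W_v           # queries, keys, values
scores = q @ k.T / np.sqrt(d_model)           # pairwise similarity of tokens
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
out = weights @ v                             # each token = weighted mix of values

print(out.shape)  # same shape as the input embeddings
```

Note that the whole thing is matrix multiplies plus a softmax; the MLP layers sit between attention blocks, but the attention is what lets tokens interact at all.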
Nah. Programming is... really hard to automate, and machine learning more so. The actual programming for it is pretty straightforward, but to make anything useful you need to get training data, clean it, and design a structure, which is much too general for an LLM.
I'm not aware of any word like that
"The ranges experienced by humans" is extremely variable. My friends from hotter countries can barely handle 10°C, but are fine at 40°C, and it's entirely the opposite for me.
I assure you that for regular use, Celsius works great. I don't really think either is better than the other in practice (outside of chemistry), but "it's the range people experience" is kinda bull. A 10 degree F difference from 0 to 10 is very different from 60 to 70.
Also, water freezing at 0°C (and boiling at 100°C, to a lesser degree) is quite convenient in everyday life. Just check for a minus sign and you know if it can freeze.
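For anyone who wants to check the "0 to 10 vs 60 to 70" point, the conversion is just arithmetic (the helper name `f_to_c` is mine, nothing standard):

```python
def f_to_c(f):
    # standard Fahrenheit-to-Celsius conversion
    return (f - 32) * 5 / 9

for lo, hi in [(0, 10), (60, 70)]:
    print(f"{lo}-{hi}°F = {f_to_c(lo):.1f} to {f_to_c(hi):.1f}°C")
# 0-10°F is well below freezing (about -17.8 to -12.2°C),
# while 60-70°F is mild spring weather (about 15.6 to 21.1°C).
```

So both are 10°F spans, but one is "serious frostbite risk" and the other is "light jacket", which is the point about neither scale mapping cleanly onto experience.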
Yeah, but the bridge is correctly over the river and the buildings aren't really merged. Tough one, though.
The second one got me tho
Sure, it's not proof, but it gives a good starting point. Non-overfitted images would still have this effect (to a lesser extent), and this would never happen to a human. And it's not like the prompts were the image labels, the model just decided to use the stock image as a template (obvious in the case with the painting).
Personally, I have no issue with models made from stuff obtained with explicit consent. Otherwise you're just exploiting labor without consent.
(Also if you're just making random images for yourself, w/e)
((Also also, text models are a separate debate and imo much worse considering they're literally misinformation generators))
Note: if anybody wants to reply with "actually AI models learn like people so it's fine", please don't. No they don't. Bugger off. https://arxiv.org/pdf/2212.03860.pdf here have a source.
To be honest, I don't think it's worth the bother. This is just an i3-5 something, and I got all the working parts off of it. But it's a good idea, thanks!
For the screenshot you might want to use a terminal that doesn't have bloom, a CRT filter, and a background; I genuinely can't see the TUI.