lightstream

joined 4 years ago
[–] [email protected] 2 points 2 months ago

I taught myself to touch-type when I was a schoolkid using something similar to Mavis Beacon. All the while, I had a voice in my head saying, "This is pointless, everyone will be talking to their computers like in Star Trek in a couple of years". Well, that was the 90s and it turned out to be one of the most useful skills I taught myself - but surely the age of the keyboard must soon be coming to an end now??

[–] [email protected] 24 points 2 months ago

Eh, that’s pretty metal.

It's definitely pretty, and as thermite is a mixture of metal powder and metal oxide, your statement is entirely correct.

[–] [email protected] 3 points 2 months ago

Imagine life in the post-apocalyptic hellscape. All electronic devices have been rendered useless by the EMPs from all the nuclear blasts. You, with your unfathomable ability to tell the time from an old wind-up clock, are viewed as a literal god among men (and women).

[–] [email protected] 14 points 2 months ago

ah they were making a nice and lame pun (anova brand == another brand)

[–] [email protected] 1 points 7 months ago (1 children)

They are remarkably useful. Of course there are dangers relating to how they are used, but sticking your head in the sand and pretending they are useless accomplishes nothing.

[–] [email protected] 1 points 7 months ago (3 children)

It models only use of language

This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is way more impressive than a casual observer might think.

If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again, the developers get stopped in their tracks because in order to understand a sentence, you need to understand the universe - or at the least a particular corner of it. For example, given the sentence "The stolen painting was found by a tree", you need to know what a tree is in order to interpret this correctly.
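That example can be sketched in a toy program (none of these names come from any real NLP library; it's just an illustration of the two readings and why world knowledge is needed to pick between them):

```python
# "The stolen painting was found by a tree" has two structurally valid parses:
# "by" can mark the agent of the passive, or a location. Both are grammatical;
# only world knowledge (trees don't find things) rules one out.
readings = [
    {"role_of_tree": "agent", "gloss": "a tree found the painting"},
    {"role_of_tree": "location", "gloss": "the painting was found near a tree"},
]

def plausible(reading, world):
    # World knowledge: only animate entities can be finders.
    if reading["role_of_tree"] == "agent":
        return world["tree_is_animate"]
    return True

world = {"tree_is_animate": False}
surviving = [r["gloss"] for r in readings if plausible(r, world)]
print(surviving)  # ['the painting was found near a tree']
```

The point of the sketch is that disambiguation happens in `plausible`, not in the grammar: syntax alone licenses both readings, and the purely procedural systems of the past stalled exactly at this step.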

You can't really use language *unless* you have a model of the universe.

[–] [email protected] 1 points 7 months ago (1 children)

they, in fact, will have some understanding

These models have spontaneously acquired a concept of things like perspective, scale and lighting, which you can argue is already an understanding of 3D space.

What they do not have (and IMO won't ever have) is consciousness. The fact we have created machines that have understanding of the universe without consciousness is very interesting to me. It's very illuminating on the subject of what consciousness is, by providing a new example of what it is not.

[–] [email protected] 2 points 7 months ago (5 children)

They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.

[–] [email protected] 2 points 1 year ago (5 children)

Privacy on that site was horrible, and after two minutes I gave up de-selecting the vendors who wanted permission to track me.

Just open the page in a private window at that point, and click the "yeah sure track everything bro" button.