xhieron

joined 1 year ago
[–] [email protected] 2 points 2 months ago

That's a very large assumption. The simplest explanation is that we feel like we have free will because we do. Quantum mechanics suggests some major challenges to determinism, and the best arguments to restore it require a very unsatisfying amount of magical thinking.

[–] [email protected] 9 points 3 months ago* (last edited 3 months ago) (2 children)

It's a broad generalization, but it's not really a matter of opinion. We can scan people's mouths and faces when they talk (and have in order to demonstrate this stuff). I think the last example probably only applies that way in particular circumstances though, since English speakers automatically group, contract, and arrange certain phonemes in certain orders (e.g., I'm not, I ain't, but never I amn't--and in real speech "I ain't" is almost always one syllable). In this example, more frequently my country ass contracts the first syllable of "gonna" away instead of the second, so "I'm 'na head to the store; y'all need anything?"

The hot potato example just stands for the premise that in real speech the t at the end of hot and the p at the beginning of potato slur together, and if you deliberately enunciate both consonants, you sound like you're reading to a transcriber. Compare the way a normal person says "let's go" to the way you sound if you force separate the words: you sound like you're doing a Mario impression.

[–] [email protected] 4 points 3 months ago

Windows 10 LTSC 2021 ends support in 2027 (although that doesn't matter quite as much). And the Win 11 LTSC coming later this year will likely be free of much of 11's bullshit by design. Linux is still the right call, but for those of us who need to run a Windows machine for whatever reason, there are alternatives, so, you know... yarr.

[–] [email protected] 3 points 4 months ago

That was the point... Did you reply to the wrong comment?

[–] [email protected] 2 points 4 months ago

That's flattering, but I was actually just expecting a press release. So where is it?

[–] [email protected] 50 points 4 months ago (14 children)

She sure can't. Sounds like all OpenAI has to do is produce the voice actor they used.

So where is she? ...

Right.

[–] [email protected] 22 points 4 months ago

AI = Absent Indians

[–] [email protected] 4 points 4 months ago

Yeah yeah, we get it, nothing's as good as crack. You don't have to rub it in.

[–] [email protected] 5 points 4 months ago

Thank you for this. That was a fantastic survey of some non-materialistic perspectives on consciousness. I have no idea what future research might reveal, but it's refreshing to see that there are people who are both very interested in the questions and also committed to the scientific method.

[–] [email protected] 4 points 5 months ago

SO close. Just another five or ten seconds to finish the whole license. I would love to see someone cover this thing and tie it off.

[–] [email protected] 5 points 5 months ago

And you're absolutely right about that. That's not the same thing as saying LLMs are incapable of composing anything written in a novel way, but the fact that they will readily, with very little prodding, regurgitate complete works verbatim is definitely a problem. That's not a remix. That's publishing the same track and slapping your name on it. Doing it two bars at a time doesn't make it better.

It's so easy to get ChatGPT, for example, to regurgitate its training data that you could do it by accident (at least until someone published it last year). But, the critics cry, you're using ChatGPT in an unintended way. And indeed, exploiting ChatGPT to reveal its training data is a lot like lobotomizing a patient or torture victim to get them to reveal where they learned something, but that really betrays that these models don't actually think at all. They don't actually contribute anything of their own; they simply have such a large volume of data to reorganize that it's (by design) impossible to divine which source is being plagiarised at any given token.

Add to that the fact that every regulatory body confronted with the question of LLM creativity has so far decided that humans, and only humans, are capable of creativity, at least so far as our ordered societies will recognize. By legal definition, ChatGPT cannot transform (term of art) a work. Only a human can do that.

It doesn't really matter how an LLM does what it does. You don't need to open the black box to know that it's a plagiarism machine, because plagiarism doesn't depend on methods (or sophisticated mental gymnastics); it depends on content. It doesn't matter whether you intended the work to be transformative: if you repeated the work verbatim, you plagiarized it. It's already been demonstrated that an LLM, by definition, will repeat its training data a non-zero portion of the time. In small chunks that's indistinguishable, arguably, from the way a real mind might handle language, but in large chunks it's always plagiarism, because an LLM does not think and cannot "remix". A DJ can make a mashup; an AI, at least as of today, cannot. The question isn't whether the LLM spits out training data; the question is the extent to which we're willing to accept some amount of plagiarism in exchange for the utility of the tool.
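For what it's worth, the "large chunks" point is easy to make concrete. Here's a toy sketch (the names and the n-gram approach are mine, not from any published detector): treat any long run of words shared verbatim between a model's output and a training document as evidence of regurgitation, while short shared runs are just how language works.

```python
def verbatim_ngram_overlap(output, corpus, n=8):
    """Return the word n-grams that appear verbatim in both texts.

    Small n catches ordinary shared phrasing; large n (big chunks)
    is where 'overlap' starts to look like regurgitation.
    """
    def ngrams(text, n):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    return ngrams(output, n) & ngrams(corpus, n)


# At n=3, these two texts share two three-word runs verbatim:
model_output = "the quick brown fox jumps over the lazy dog"
training_doc = "he saw the quick brown fox dart across the road"
shared = verbatim_ngram_overlap(model_output, training_doc, n=3)
```

The threshold `n` is exactly the judgment call in the comment above: nobody objects to three shared words, but a shared three-hundred-word run isn't a remix.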
