this post was submitted on 02 Oct 2023
165 points (89.5% liked)

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (3 children)

This is exactly correct, except you’re also not accounting for the insane amount of computational power that would be necessary to backtrack a single output of a single model. This is why it is a black box. It simply is not possible on a meaningful level.

It's not practical, but it can be done. We simply don't have the time or inclination to do it.

It's like saying we don't understand how an internal combustion engine works. Every explosion is a bit different: it pushes the pistons a bit more or less, it leaves a bit more or less residue in different places. We can't backtrack and check every cycle and every part on a meaningful level, but we understand how it works, and we could do it if we wanted to. It's just not practical.

As an example from my field: if you damage the dorsolateral prefrontal cortex in a fully grown adult, they will have the impulse control of a three-year-old. We know this because we have observed damage to this area in multiple individuals, and can measure the effects based on the severity of that damage.

Okay so explain how sudden savant syndrome works. Step by step, biochemical process by biochemical process.

In contrast, if you provide the same billion-parameter neural network identical inputs, you will not receive identical outputs.

If you take the same model, put it in a VM, give it an input, get an output, and then restore the VM to the exact same state and ensure there's no randomness, the model will give you the same output.
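A minimal sketch of that point, using a toy two-layer network with made-up weights (the numbers and network shape here are purely illustrative): with fixed weights and no sampling or random seeds involved, inference is just a pure mathematical function, so identical inputs must yield identical outputs.

```python
import math

# Hypothetical fixed weights for a tiny 2-input, 2-hidden-unit network.
W1 = [[0.5, -0.2], [0.1, 0.8]]
W2 = [0.3, -0.7]

def forward(x):
    # Deterministic forward pass: no randomness anywhere.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

a = forward([1.0, 2.0])
b = forward([1.0, 2.0])
print(a == b)  # True: same state, same input, same output
```

Real LLM chat services look nondeterministic only because they deliberately sample from the output distribution; remove that (e.g. greedy decoding, fixed state), and the same guarantee holds.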

[–] [email protected] -1 points 1 year ago (2 children)

> It's not practical, but it can be done. We simply don't have the time or inclination to do it.

is also supposed to be true of the human mind.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

Not really, because there are still processes in the human brain we don't understand.

For example, you can list the steps and processes behind every step an AI system takes. You have to, in order to code and run it.

But you can't list every step or process taking place in cases of sudden savant syndrome, for example.

[–] [email protected] 0 points 1 year ago

That's not really the case for all ML systems. The point is that programmers can produce behavior they themselves did not design, collect, or anticipate, by creating models that generate their own decision trees based on input data.