[–] [email protected] 3 points 4 months ago

And that's the third time you've tried to put words into my mouth, rather than arguing my points directly.

Have fun battling your straw men, I'm out.

[–] [email protected] 4 points 4 months ago* (last edited 4 months ago) (3 children)

you're wanting to give people the right to control other people's ability to analyze the things that they see on public display.

For the second time, that's not what I want to do - I pretty much said so explicitly with my example.

Human studying a piece of content - fine.
Training a Machine Learning model on that content without the creator's permission - not fine.

But if you honestly think that a human learning something and an ML model learning something are exactly the same, and should be treated as such, this conversation is pointless.

[–] [email protected] 4 points 4 months ago (5 children)

No, just the concept of getting a say in who can train AIs on your creations.

So yes, that would leave room for a loophole where a human could recreate your creation (without just making a copy), and they could then train their model on that. It isn't watertight. But it doesn't need to be, just better than what we have now.

[–] [email protected] 15 points 4 months ago (7 children)

Agreed. It was fun as a thought exercise, but this failure was inevitable from the start. Ironically, the existence and usage of such tools will only hasten their obsolescence.

The only thing that would really help is GDPR-like fines (calculated as a percentage of income, not profit) for any company that trains or knowingly uses models trained on data without explicit consent from its creator.

[–] [email protected] 25 points 4 months ago (6 children)

Well, just a month ago they couldn't pay out a bounty to Kaspersky for a 0day exploit they found due to the sanctions, so this seems a little off.

[–] [email protected] 1 points 4 months ago

Who knows, maybe it'll teach people to be more skeptical of the things they read online, and actually look for the underlying sources.

[–] [email protected] 1 points 4 months ago

But why wouldn't those same limits apply to biological controllers? A neuron is basically a transistor.

[–] [email protected] 2 points 4 months ago

Because people don't read articles, and this way OP can still get the rage engagement.

In a way, that's what I'm contributing to now as well. So that's why I'm not even going to

[–] [email protected] 8 points 4 months ago* (last edited 4 months ago) (5 children)

I'd wager the main reason we can't prove or disprove that is that we have no strict definition of intelligence or sentience to begin with.

For that matter, computers have many more transistors and are already capable of mimicking human emotions - how ethical is that, and why does it differ from bio-based controllers?

[–] [email protected] 5 points 4 months ago

The line has been changed to be gender neutral 9 hours ago. Victory!
