Floey

joined 1 year ago
[–] [email protected] 2 points 3 weeks ago

Lord Kill The Pain

[–] [email protected] 13 points 1 month ago (1 children)

It takes a village to raise a child, not a "mother" and "father" specifically. I do not idolize the hetero nuclear family.

[–] [email protected] 25 points 1 month ago

Your internal gender didn't just fall out of a coconut tree.

[–] [email protected] 6 points 2 months ago

I wouldn't advocate for someone eating palm oil simply for their own personal health. However, if you want to talk about the environment, way more land is cleared for livestock than for oil palm, even if you only look at the regions where oil palm is grown. And palm oil usually replaces animal fats like lard and ghee in cooking because of its saturated fat content.

[–] [email protected] 1 point 2 months ago (1 children)

Something like Microsoft Word or Paint is not generative.

It is standard for publishers to make indemnity agreements with the creatives who produce for them because, like I said, it's kinda difficult to prove the absence of plagiarism, so a publisher doesn't want to take the risk of distributing works whose originality cannot be verified.

I'm not arguing that we should change any laws, just that people should not use these tools for commercial purposes if the producers of these tools will not take on liability, because if they refuse to do so, their tools are very risky to use.

I don't see how my position affects the general public, who aren't using these tools commercially; it's purely about the relationship between creatives and publishers using AI tools, and what they should expect and demand.

[–] [email protected] 0 points 2 months ago (3 children)

Those analogies don't make any sense.

Anyway, as a publisher, if I cannot get OpenAI/ChatGPT to sign an indemnity agreement making them liable for plagiarism, then their tool is effectively useless, because it is really hard to determine that something is not plagiarism. That makes ChatGPT pretty sus for creatives to use. So who is going to pay for it?

[–] [email protected] 19 points 2 months ago (12 children)

While I agree that using copyrighted material to train your model is not theft, the text that model produces can very much be plagiarism, and OpenAI should be on the hook when it occurs.

[–] [email protected] 6 points 2 months ago

It's not hypocritical to care about some parts of copyright and not others. For example, most people in the FOSS crowd don't really care about using copyright as monetary leverage by being the sole distributor of a work, but they do care about attribution.

[–] [email protected] 2 points 2 months ago

The brain doesn't do so well when it's isolated from stimulus for long periods of time.

[–] [email protected] 15 points 2 months ago

LLMs don't "know" anything. The true things they say are just as much bullshit as the falsehoods.

[–] [email protected] 1 points 2 months ago

Literally Bill Gates.
