UnpluggedFridge

joined 8 months ago
[–] [email protected] 7 points 4 months ago

X lost half a billion dollars in the first quarter of 2023. Odd that the financial expert didn't mention this even though it is literally in the same sentence as the "40% drop in revenue" statement in the article.

[–] [email protected] 2 points 5 months ago (3 children)

We probably don't want to use the current leading cause of death for kids as a template for good policy.

[–] [email protected] 2 points 5 months ago

If you do the search I suggested, you will find relevant reviews immediately. If you add keywords based on my post text, you will find the primary sources immediately.

[–] [email protected] 2 points 5 months ago (1 children)

https://www.cdc.gov/mmwr/volumes/66/wr/mm6630a6.htm

Teenage suicide rates had been declining for over a decade, especially in males. Now they are increasing in both males and females. You would have to be a complete monster to not want to study, understand, and reverse this trend.

[–] [email protected] 9 points 5 months ago (6 children)

Go to PubMed. Type "social media mental health". Read the studies, or the reviews if you don't have the time.

The average American teenager spends 4.8 hours/day on social media. Increased use of social media is associated with increased rates of depression, eating disorders, body image dissatisfaction, and externalizing problems. These studies don't show causation, but guess what, we literally cannot show causation in most human studies because of ethics.

Social media drastically alters peer interactions, with negative interactions (bullying) associated with increased rates of self harm, suicide, internalizing and externalizing problems.

Mobile phone use alone is associated with sleep disruption and daytime sleepiness.

Looking forward to your peer-reviewed critiques of these studies claiming they are all "just vibes."

[–] [email protected] 2 points 5 months ago (5 children)

What do you mean by "work"? Do they stop everyone from doing stupid things? No. Do they have a measurable effect on behavior? Yes.

[–] [email protected] 4 points 5 months ago (2 children)

I remember hearing this argument before...about the Internet. Glad that fad went away.

As has always been the case, these technologies are being used to push us forward by teams of underpaid, unnamed researchers with no interest in profit. Meanwhile, you focus on the scammers and capitalists and unload your wallets to them, all while complaining about the lack of progress as measured by the products you see in advertisements.

Luckily, when you get that cancer diagnosis or your child is born with some rare disease, that progress will attend to your needs despite your ignorance of it.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

Read again. I have made no such claim; I simply scrutinized your assertion that LLMs lack any internal representations and challenged it with alternative hypotheses. You are the one who made the claim. I am perfectly comfortable with the conclusion that we simply do not know what is going on in LLMs with respect to human-like capabilities of the mind.

[–] [email protected] 0 points 5 months ago (2 children)

Nor can we assume that they cannot have the same emergent properties.

[–] [email protected] 41 points 5 months ago (8 children)

These cases are interesting tests of our First Amendment rights. "Real" CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.

Currently, we do not outlaw written depictions or drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions, but also because we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, though, it is difficult to justify criminalizing fictional depictions of child abuse.

So how are AI-generated depictions different? First, they are not obvious fictions. Is that enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely products of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for "real" images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?

We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.

A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as "real," we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.

[–] [email protected] 1 points 5 months ago

We do not know how LLMs operate. Similar to our own minds, we understand some primitives, but we have no idea how certain phenomena emerge from those primitives. Your assertion would be like saying we understand consciousness because we know the structure of a neuron.
