Super hot and spicy take incoming: AI will be able to make very realistic child porn, and we might actually see a huge drop in child sexual abuse.
I hate to even type that sentence, btw.
It already is being used to make CSAM. I work for a hosting provider and just the other day we closed an account because they were intentionally hosting AI generated CSAM.
Welp that's horrifying
Can I ask why AI generated media is considered CSAM if there are no victims? I don't like furry porn but it's not bestiality. I don't like loli shit but it's not CP (well, technically it is, but my point is it's not real kids). How is it any different?
Is it gross? Obviously, though I'm biased since I don't like kiddie shit. But no one is getting hurt, and if it helps reduce sexual abuse cases against kids, why wouldn't you be in favor of it?
I don't understand how this is unreasonable. If AI generated CP increased the number of kids being harmed, then I'd be vehemently opposed. I know it's a touchy subject, but you can't just write it off if it works for the greater good, no?
The report came from a (non-US) government agency. It wasn't reported as AI generated, that was what we discovered.
But it highlights the reality: while AI generated content may be fairly obvious to spot for now, it won't be forever. Real CSAM could be mixed in at some point, or, hell, the people generating it could be feeding it real CSAM to have it recreate it in a manner that makes it harder to detect.
So what does this mean for hosting providers? We continuously receive reports for a client, and each time we have to review it and, what, use our best judgement to decide if it's AI generated? Add the client to a list and ignore CSAM reports for them? Tell the government that it's not "real CSAM" and expect it to end there?
No legitimate hosting provider is going to knowingly host CSAM, AI generated or not. We aren't going to invest legal resources into defending that, nor are we going to jeopardize the mental well-being of our staff by increasing the frequency of those reports.
Very true, and I would like to look into it further. Being able to disguise real content with an "AI" label could make things harder for the people who detect and report these types of issues.
I don't understand the logic behind this. If it's your job to analyze and deduce whether certain content is or is not acceptable, why shouldn't you make assessments on a case-by-case basis? Even if you remove CSAM from the equation, you still have to continuously sift through content and report any and all illegal activities, regardless of their frequency.
And it's the right of any website or hosting provider not to show any content they deem unsuitable for its viewers. But this is a non sequitur: illegal activities will never stop, and it's the duty of people like you to help combat the distribution of such materials. I appreciate all the work people like you do, and it's a job I couldn't handle. CP exists and will continue to exist. It's just an ugly truth. I'm just asking a very uncomfortable question that will hopefully result in a very positive answer: can AI generated CP reduce the harm done to children?
Here's a very interesting article on the potential positive effects of AI generated CP.
Btw I appreciate your input in all of this. It means a lot coming from someone actually involved with this sort of thing.
Edit: and to your point, the article ends with a very real warning:
"Of course, using AI-generated images as a form of rehabilitation, alongside existing forms of therapy and treatment, is not the same as allowing its unbridled proliferation on the web.
“There’s a world of difference between the potential use of this content in controlled psychiatric settings versus what we’re describing here, which is just, anybody can access these tools to create anything that they want in any setting,” said Portnoff, from Thorn."
The bit about "ignoring it" was more in jest. We do review each report and handle it on a case-by-case basis. My point is that someone hosting questionable content is going to generate a lot of reports, regardless of whether it is illegal or not, and we won't take an operating loss just to let them keep hosting with us.
Usually we try to determine whether it was intentional or not. If someone is hosting CSAM and is quick and responsive about resolving the issue, we generally won't immediately terminate them for it. But even if they (our client) are a victim, we are not required to host for them, and after a certain point we will terminate them.
So when we receive a complaint about a user hosting CSAM, review it, and see they are running a site that advertises itself as a place for users to distribute AI generated CP, we aren't going to let them continue hosting with us.
This is not an accurate statement, at least in the U.S., where we are based. We are not (yet) required to sift through any and all content uploaded to our servers (not to mention that the complexity of such an undertaking makes it virtually impossible at our level). There have been a few proposed laws that would have changed that, as we've seen in the news from time to time. We are required to handle reports we receive about our clients.
Keep in mind that when I say we are a hosting provider, I'm referring to pretty high up the chain: we provide hosting to clients that would, say, host a Lemmy instance, or a Discord bot, or a personal NextCloud server, to name a few examples. A common dynamic is how much abuse your hosting provider is willing to put up with, and if you recall the CSAM attacks on Lemmy instances, part of the discussion was the risk of getting their servers shut down.
Which is valid: hosting providers will only put up with so much risk to their infrastructure, reputation, and/or staff. Which is why people who run sites like Lemmy or image hosting services usually do want to take an active role in preventing abuse; whether or not they are legally liable won't matter when we pull the plug because they are causing us an operating loss.
I'm just going to reply to the rest of your statement down here; I think I did not make my intent clear enough. I originally replied to your statement about AI being used to make CP in the future by providing a personal anecdote about it already happening. You then asked why I defined AI generated CP as CSAM, and I clarified. I wasn't actually responding to the rest of that message. I was not touching the topic of what impact it might have on the actual abuse of children, merely giving my opinion as to why, legal or not, hosting providers aren't ever going to host that content.
The content will be hosted either way. Whether it is merely relegated to "offshore" providers, still accessible via normal means and not treated as criminal content, or becomes another part of the dark web, will be determined at some point in the future. It hasn't become a huge issue yet, but it is rapidly approaching that point.
The fact that it can make CP at all is the reason why it needs to be banned outright.
EDIT: Counting ~~8 9 10~~ 11 butthurt pedophiles afraid their new CP source will be banned
Nah, just hook it up to some predator drones and build a pedo hunting skynet.
No one is butthurt. I have no interest in CP (thank fucking god), but if it means people get their rocks off at home without hurting any kids, then I'm all for it.
What's interesting is that you have a strong disdain for fake porn but no real argument against it other than "heeeyuck kiddy porn bad aaahheeeyuuck". 😂
Edit: no real arguments and just downvotes? Seems like a typical facts vs. feelings argument ¯\_(ツ)_/¯
Did you make that for me?
I'm actually flattered! 😂
Why's that? There are no children being hurt.
Funny how just bringing up a solution that, although uncomfortable, could reduce cases of sexual abuse against kids without creating any victims gets you branded as a pedo.
I just want kids to stop getting abused lol