“We want to ensure that people have maximum control to the extent that it doesn’t violate the law or other peoples’ rights,” Joanne Jang, a member of the product team at OpenAI, told NPR. “There are creative cases in which content involving sexuality or nudity is important to our users.”
The other problem, in my mind, is the fallibility of current safeguards. OpenAI and its rivals have been refining their filtering and moderation tools for years, but users constantly discover workarounds that let them abuse the companies' AI models, apps, and platforms.
Those are some highlights from the article.
AI porn seems inevitable, and since OpenAI has safeguards in mind for exploitative content, this doesn't strike me as a horrendous idea.