this post was submitted on 19 Jan 2024
384 points (98.2% liked)

Technology

59148 readers
2352 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

ChatGPT's new AI store is struggling to keep a lid on all the AI girlfriends::OpenAI: 'We also don’t allow GPTs dedicated to fostering romantic companionship'

you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 33 points 9 months ago (26 children)

why? why not let people just retreat into fantasy? it's probably healthier than many common coping mechanisms. i mean, it's a chatbot, how much can you do with it?

let people have their temporary salve to get them thru whatever they were going thru such that they were resorting to this. and if it's not temporary, ok, fine? better to have some outlet than be even more mentally isolated. maybe in 50 years this will be common, who knows.

[–] [email protected] 55 points 9 months ago (4 children)

Liability. Imagine an AI girlfriend who slowly earns your affection, then at some point manipulates you into sending bitcoins to a prespecified wallet set up by the model maker. Because models are black boxes, there is no way to verify by direct inspection that an AI hasn't been trained with an ulterior agenda (the "execute order 66" problem).

[–] [email protected] 5 points 9 months ago (1 children)

Yep, I was having a conversation with a guy that informs policy makers on ai, he had given a whole presentation to a school board meeting I went to a few nights ago.

He said that's his highest recommendation when it comes to what should be done on the lawmaker side, pass bills that push for opening up those black boxes so we can ensure transparency.

[–] [email protected] 9 points 9 months ago (1 children)

Problem is, there isn't a way to open up the black boxes. It's the AI explainability problem. Even if you have the model weights, you can't predict what they will do without running the model, and you can't definitively verify that the model was trained as the model maker claimed.

[–] [email protected] 1 points 9 months ago

I see, my knowledge is surface deep so I admit this is new information to me.

Is there no way to ensure LLMs are safe for like kids to use as a tool for education? Or is it just inherently going to come with some risk of exploitation and we just have to do our best to educate students of that danger?

load more comments (2 replies)
load more comments (23 replies)