this post was submitted on 22 Dec 2023
118 points (91.0% liked)

Technology

59374 readers
3250 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
[–] [email protected] 5 points 10 months ago (2 children)

people are able to explain themselves

Can they, though? Sure, they can come up with some reasonable-sounding justification, but how many people truly know themselves well enough for that justification to be accurate? Is it any more accurate than asking GPT to explain itself?

[–] [email protected] 2 points 10 months ago

I did say that people and AI would have similarly poor results at explaining themselves. So we agree on that.

The one thing I'll add is that certain people performing certain tasks can be excellent at explaining themselves, and if a specific LLM exists that can do that, I'm not aware of it. I specify LLM because I want to ensure we're talking about an AI with some capacity for generalized knowledge. I wouldn't be surprised if there are very narrow AIs that have been trained only to explain one very specific thing.

I guess I'm in a mood to be reminded of old science fiction stories, because this brings to mind one where certain people were trained to memorize situations so they could testify about them later. I initially want to say it's a hugely famous novel like Stranger in a Strange Land, but I might easily be wrong. Anyway, the example the book gave was a witness describing a house: rather than saying the house was white, they would say it was white on the side facing them. The point was that they'd state things as precisely as possible, so that there was no way they'd be even partially wrong.

Anyway, that's tangentially related at best, but the underlying connection is that people, with the right training and motivation, can be very mentally disciplined, which is unlike any AI I know of, and also probably very unlike this comment.

[–] [email protected] 1 points 10 months ago

To me, at least, the exciting part is that we're getting to a point where this is a legitimate question, regarding both us and our emerging AIs.

I wouldn't be surprised at all if an AI learns to explain its own behaviour before we understand our own minds well enough to match it; there's a lot we don't know about our own decision-making processes.