this post was submitted on 22 Dec 2023
118 points (91.0% liked)
Technology
you are viewing a single comment's thread
No ethical AI without explainable AI
People are able to explain themselves, and some AI can too, with similarly poor results.
I'm reminded of one of Asimov's stories about a robot whose job was to aim an energy beam at a collector on Earth.
Upon talking to the robot, they realized that it was less of a job to the robot and more of a religion.
The inspector freaked out because this meant that the robot wasn't performing to specs.
Spoilers: Eventually they realized that the robot was doing the job either way, and they just let it do it for whatever reason.
Can they though? Sure, they can come up with a reasonable-sounding justification, but how many people truly know themselves well enough for that to be accurate? Is it any more accurate than asking GPT to explain itself?
I did say that people and AI would have similar poor results at explaining themselves. So we agree on that.
The one thing I'll add is that certain people performing certain tasks can be excellent at explaining themselves, and if a specific LLM exists that can do that, I'm not aware of it. I specified LLMs because I want to ensure we're talking about an AI with some generalized knowledge. I wouldn't be surprised if there are very specific AIs that have been trained only to explain a very narrow thing.
I guess I'm in a mood to be reminded of old science fiction stories, because I'm also reminded of one where certain people were trained to memorize situations so they could testify about them later. I initially want to say it's a hugely famous novel like Stranger in a Strange Land, but I might easily be wrong. Anyways, the example the book gave was a person describing a house: rather than saying the house was white, they described it as being white on the side facing them. The point being that they'd explain something as close to right as possible, to the point that there was no way they'd be even partially wrong.
Anyways, that seems tangentially related at best, but the underlying connection is that people, with the right training and motivation, can be very mentally disciplined, which is unlike any AI that I know of, and probably also unlike this comment.
At least to me, the exciting part is that we're getting to a point where this is a legitimate question, regarding both us and our emerging AIs.
I wouldn't be surprised at all if an AI could explain its own behaviour before we understand our own minds well enough to match that: there's a lot we don't know about our own decision-making processes.
technically correct
Yeah, it's fascinating technology, but there's too much smoke and mirrors to trust any of the AI salesmen, since they can't explain exactly how it makes decisions.