this post was submitted on 03 Sep 2023
207 points (89.1% liked)

[–] [email protected] 39 points 1 year ago (10 children)

When a service is willing to take responsibility for collisions and driving violations, then we'll know it works. If the guy asleep at the wheel (which he supposedly can be in an autonomous car) is still the one held responsible, then we're not there yet.

That said, "end-to-end AI" totally sounds like equivocal marketing buzz.

[–] [email protected] 1 points 1 year ago (7 children)

I wonder what happens when the car is on a collision course with a golden retriever and the only way not to hit it would be to damage the car. Or, same scenario, but the only way not to hit it is to hit an '07 Corolla parked on the side of the road. Not saying humans have superior judgement... just wondering whether it will be programmed according to actuarial theory or philosophical principles.

[–] [email protected] 3 points 1 year ago (3 children)

That makes me think: will the AI see a kid who's about to run out from behind a parked car? As a human, if I see a kid run from a house into a row of parked cars, I know he's still there and will slow down before I get there. But would self-driving make that same leap of logic? I'm not sure what the range and capabilities of self-driving cars' sensors are right now, but hopefully they'd be smart enough to take preventive measures.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Good question. Neural networks are modelled after how brains learn and process information, so it's certainly theoretically possible for a neural network (or other machine learning algorithm) to make inferences like that, just like how you've learned them with years of experience.

The biggest challenge in any machine learning is finding enough labelled training data. In fact, a friend of mine contributed to a paper in which (no joke) GTA V was used to generate labelled training data for an autonomous vehicle. Because it's a game engine, every object in the game is already digitized, and the 3D modelling is accurate enough to be useful, at least. This vehicle used LIDAR, so the actual shaders and such didn't matter as much as the 3D point cloud.
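To make the "free labels from a simulator" idea concrete, here's a toy sketch. It is not from the paper mentioned above; all names and numbers are made up for illustration. The point is that in a game engine every object's position and class are already known, so a synthetic LIDAR-like point cloud comes out pre-labelled, with no human annotation step:

```python
import random

def sample_points(center, size, n, label):
    """Scatter n simulated LIDAR returns around an object the engine
    already knows about, each tagged with its ground-truth class."""
    cx, cy, cz = center
    return [
        (
            (cx + random.uniform(-size, size),
             cy + random.uniform(-size, size),
             cz + random.uniform(-size, size)),
            label,  # ground-truth label, free because the engine knows the object
        )
        for _ in range(n)
    ]

# Two objects the engine "knows" are in the scene: a parked car and a pedestrian.
scene = (
    sample_points((10.0, 2.0, 0.5), 1.5, 200, "car")
    + sample_points((4.0, -1.0, 0.9), 0.4, 50, "pedestrian")
)

points = [p for p, _ in scene]   # what the simulated LIDAR "sees"
labels = [l for _, l in scene]   # the supervision signal, already attached
print(len(points), labels.count("car"), labels.count("pedestrian"))
# 250 points total: 200 labelled "car", 50 labelled "pedestrian"
```

In a real pipeline the simulator (GTA V in that paper, or something purpose-built like CARLA) would ray-cast against actual 3D geometry rather than jitter random points, but the labelling shortcut is the same.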
