this post was submitted on 11 Oct 2024
25 points (60.9% liked)
A lidar could tell the difference between a person on a bus billboard and an actual person. It brings 3D to a 2D party.
A lidar alone can't do that. It'll just build a 3D point cloud. You still need software to detect the individual objects in there, and that's easier said than done. So far Tesla seems to be achieving this just fine using cameras alone. Human eyes can tell the difference between an actual person and a picture of a person too. I don't see why this is supposed to be something you can't do with just cameras.
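To make that concrete, here's a toy sketch of one step of such a pipeline: grouping raw lidar returns into candidate objects by Euclidean proximity. The function name and threshold are made up for illustration; real stacks use voxel grids, KD-trees, and learned segmentation on top of this idea.

```python
import math

def cluster_points(points, radius=0.5):
    # Naive single-linkage clustering: a point joins a cluster if it lies
    # within `radius` of any point already in it. O(n^2) brute force, so a
    # real lidar pipeline would accelerate this, but the idea is the same.
    labels = [-1] * len(points)
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        frontier = [i]
        while frontier:
            j = frontier.pop()
            for k in range(len(points)):
                if labels[k] == -1 and math.dist(points[j], points[k]) < radius:
                    labels[k] = next_label
                    frontier.append(k)
        next_label += 1
    return labels

# Two well-separated blobs of returns -> two candidate objects
cloud = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 0.0), (5.1, 5.0, 0.0)]
print(cluster_points(cloud))  # [0, 0, 1, 1]
```

Even after clustering, you still have to classify each blob (person? pole? motorcycle?), which is where the hard perception work actually lives.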
Funny, last I heard, Tesla FSD has a tendency to run into motorcycles.
With lidar there would be no doubt that there is an actual object, and obviously you don't drive into it.
are the tesla cameras 3D?
No, and neither are your eyes, but you can still see the world in 3D.
You can use normal cameras to create 3D images by placing two cameras next to each other and creating a stereogram. Alternatively, you can do this with just one camera by taking a photo, moving the camera slightly, and then taking another photo — exactly what cameras on a moving vehicle are doing all the time. Objects closer to the camera move differently than the background. With a billboard showing a person, the background in that picture moves relative to the person differently than the background behind an actual person would.
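The parallax argument above can be put in numbers with the pinhole stereo relation depth = f·B/d: the nearer an object is, the larger the disparity (horizontal shift) between the two views. All values below are made-up illustrations, not actual Tesla camera parameters.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole stereo model: depth = focal length * baseline / disparity.
    # The same relation governs motion parallax from a single moving camera,
    # with the baseline being the distance travelled between frames.
    return focal_px * baseline_m / disparity_px

f = 700.0  # focal length in pixels (assumed)
B = 0.12   # camera separation / travel between frames in metres (assumed)

# An actual pedestrian shifts by 10 px between views; the flat billboard
# surface (and the person printed on it) shifts by only 2 px:
print(depth_from_disparity(f, B, 10.0))  # 8.4  -> ~8 m away: a real person
print(depth_from_disparity(f, B, 2.0))   # 42.0 -> ~42 m away: the billboard
```

The printed person inherits the billboard's disparity, so depth-from-motion puts them at billboard distance, not pedestrian distance — which is exactly how parallax separates a picture of a person from a person.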
That's a grossly misleading statement.
We definitely use 2 eyes to achieve a 3D image with depth perception.
So the question is obviously whether Tesla does the same with their Camera AI for FSD.
IDK if they do, but if they do, they apparently do it poorly. Because FSD has a history of driving into things that are obviously (for a human) in front of it.
Talk about making a difficult problem (self-driving) more difficult to solve by solving another hard problem.
Just slapping on a lidar doesn't simply solve that issue for you either. Picking out individual objects from the point cloud data is equally difficult, and you then have to deal with cameras on top of it, since Waymo uses both. I don't see how having lidar *and* cameras would be easier to deal with than cameras alone.
Also, Tesla has already more or less solved this issue. FSD works just fine with cameras only, and the new HW4 models have radar too.
Human eyes can't do it alone either. The brain has to process the information delivered.