Ivan Sagalaev :flag_wbw:

@inthehands I think their vision layer is okay. It can reliably identify and classify objects and their placement. It's what to do with this information that has always been the problem: you've got this car over there moving that way and that car standing over here. What input do you apply to the pedals and the steering wheel? This part turned out to be harder than vision. And now they're trying to solve it with AI as well, which just swaps one set of edge cases for another and can't be debugged.

Paul Cantrell

@isagalaev At least some of the embarrassing Tesla self-driving fails I’ve seen in videos online are situations where cross-checking multiple forms of input (radar, map, etc) would probably have helped a lot.

Ivan Sagalaev :flag_wbw:

@inthehands I own one, and I can tell that when they switched from radar to vision for detecting obstacles in front of the car, it became much smoother and more reliable. Radar is too low-res and just produces a noisy signal you can't rely on.

Another thing that's missing is memory. Musk likes to talk about human eyes as sensors, but we also rely on memory *a lot*. After going through a turn a few times, a human is much better at predicting behavior. But Tesla goes into every interaction tabula rasa.
