The vision system should have been able to range that.

Please understand that the vision system in a Tesla isn't like your vision system. There is no AI constructing a model of a 3D world out of 2D visual data, with a road surface and 3D objects located within it. There is no human or higher-mammal level of comprehension of the scene. There is probably a series of algorithmic tricks that lets the car determine in which direction the distant road lies. The computer can then meld that information with data from the car's other, shorter-range sensors that do return distance.

The reason things like LIDAR are used in self-driving cars is that these systems can numerically build a model of the 3D scene without needing an AI to reconstruct it from 2D camera data. They return distance information, so the data starts out as 3D and far less interpretation is necessary. In all likelihood, nothing in a Tesla understands what a truck trailer is, so how is it going to interpret that set of 2D optical data as an object that's like a movable wall suspended a few feet in the air? There's probably only a rudimentary notion of an obstacle in the software.
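
To make the "data starts out as 3D" point concrete, here's a minimal sketch (Python/NumPy; the function and parameter names are mine, not from any real LIDAR driver) of how raw range returns become 3D points with nothing more than trigonometry:

    import numpy as np

    def returns_to_points(ranges, azimuths, elevations):
        """Turn raw LIDAR returns (range in metres, azimuth/elevation in
        radians) into Cartesian (x, y, z) points.

        The sensor measures distance directly, so the 3D structure of the
        scene falls out of simple geometry -- no learned model, no guessing
        depth from a flat image.
        """
        x = ranges * np.cos(elevations) * np.cos(azimuths)  # forward
        y = ranges * np.cos(elevations) * np.sin(azimuths)  # left / right
        z = ranges * np.sin(elevations)                      # up / down
        return np.column_stack((x, y, z))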




Mobileye claims more than that. See their promotional video on rear-end collision avoidance. [1] See them displaying distance to target. You can buy this as a smartphone app (!), or as a retrofit.[2]

Here's a long theory talk by Mobileye's CTO and co-founder.[3]

[1] https://www.youtube.com/watch?v=HXpiyLUEOOY

[2] https://www.youtube.com/watch?v=kL-gatAmwhw

[3] https://www.youtube.com/watch?v=GZa9SlMHhQc


Interestingly, from elsewhere on the HN frontpage right now: "Mobileye split with Tesla spurs debate on self-driving tech" http://www.reuters.com/article/us-mobileye-tesla-idUSKCN1061...


Obviously, they are working on interpreting camera data into 3D, and they are not finished yet. They did specifically state that their software isn't ready to detect a crossing semi trailer (not until 2018).


I'm highly skeptical that they'll ever be able to differentiate extremely similar targets, at distance, with a relatively cheap camera working in the visible spectrum. I'd love to hear their tricks for getting that to work. In the meantime, people building actual driverless cars are using LIDAR for a reason: a 20 Hz, full 360-degree view with a real range-to-target measurement, not an algorithmic estimate.


I wonder if Elon will ever live down his comments that LIDAR "doesn’t make sense" and is "unnecessary" in the context of an autonomous car after this[1].

1. http://9to5google.com/2015/10/16/elon-musk-says-that-the-lid...


Musk's reasoning is as follows: computer vision is getting better and cameras will always be cheap. We use LIDAR today because computer vision isn't that great.

That said, LIDARs and sensors that do the same job are getting cheaper. One thing that might change the game is the development of sensors that don't require mechanical scanners: DARPA recently demonstrated a non-mechanical way to scan a laser beam very fast, and mm-wave radar is starting to approach the capabilities of LIDAR.[1]

[1] http://www.businesswire.com/news/home/20131014006233/en/Pana...


That remark will probably come up in the lawsuit by the dead driver's survivors. Probably in the context of "gross negligence".


LIDAR doesn't work in rain and fog.


LIDAR can work quite well in rain and fog with proper processing. There are range-gated imagers for that.[1] You tell the imager to ignore everything for a delay of N nanoseconds, and you don't see the fog reflections out to roughly N/2 feet (light covers about a foot per nanosecond, and the return has to make the round trip). You can run the range gate in and out until you see through the rain and fog. These are available as hand-held devices.
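
Back of the envelope, the gate timing is just round-trip light travel time. A sketch (illustrative numbers, not any particular imager's interface):

    # Light covers roughly 0.3 m (about a foot) per nanosecond, and the
    # return has to make a round trip, hence the factor of two.
    C_M_PER_NS = 0.2998  # speed of light, metres per nanosecond

    def gate_delay_ns(min_range_m):
        # Delay before opening the shutter, so anything closer than
        # min_range_m (e.g. the fog right in front of you) is ignored.
        return 2.0 * min_range_m / C_M_PER_NS

    def gate_width_ns(slice_depth_m):
        # How long to keep the shutter open to image a slice of scene
        # slice_depth_m deep beyond the gate.
        return 2.0 * slice_depth_m / C_M_PER_NS

    # Ignore the first 30 m of fog, then image a 20 m deep slice behind it.
    print(gate_delay_ns(30.0))  # ~200 ns
    print(gate_width_ns(20.0))  # ~133 ns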

This technology was used on ships in fog back in 2004, but now that it's down to hand-held size, it seems to be more of a military thing.

There are lots of interesting things you can do with LIDAR that the Velodyne people don't do. "First and last" returns, for example. But enough for tonight.

[1] http://www.sensorsinc.com/applications/military/laser-range-...


I'm not advocating for LIDAR. I'm trying to explain the difference between interpreting 2D visual data as 3D and what a device like LIDAR does. RADAR does work in rain and fog.


Looking through rain with LIDAR is like looking through chaff for radar. An X-ray machine wouldn't work very well if there was a cloud of lead dust in the air.


Does any current optical technology work any better in rain and fog? Not that 'optical' is a requirement anyway - what is needed is anything that does work.


The radars other manufacturers use work well - e.g. my Mazda will detect obstacles and start braking even in weather conditions where optical visibility is way worse. Pretty much all other car manufacturers use such radars for their adaptive cruise control systems (front-mounted) and blind spot monitoring (rear-mounted).



This is definitely a cost-benefit calculation. LIDAR is not cheap at the moment. In years to come we may have the perspective that LIDAR is necessary, but at the moment it doesn't seem worth it.


A Tesla isn't cheap either, and Tesla didn't have to introduce their beta-level software/hardware to the public and then claim "you're using it wrong" when some guy's head gets sheared off by a semi.


I hope this is intended as sarcasm.


No one suggested that it needs an intimate understanding of trucks. It needs to be able to tell the difference between empty space and not empty space. It needs to be able to do that for the entire volume the vehicle will occupy, not just some of it. Otherwise it will run into trees, wires strung up to decapitate motorcyclists[1], farm animals, and other obstacles that may or may not extend all the way to the ground directly in front of the sensor.

[1] Unfortunately that is a thing. http://lanesplitter.jalopnik.com/police-hunting-sadistic-bas...
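
In point-cloud terms, "empty space vs. not empty space for the entire volume the vehicle will occupy" is just an occupancy test over the swept corridor. A minimal sketch (Python/NumPy; assumes 3D points already in the car's frame, and the thresholds are illustrative):

    import numpy as np

    def path_is_clear(points, lane_half_width=1.5, roof_height=1.7,
                      ground_clearance=0.2, lookahead=60.0):
        # True if no point lies inside the box the car is about to sweep
        # through: ahead of the bumper, within the lane width, anywhere
        # between near-ground level and roof height.
        #
        # Note the height band: a trailer floating a metre off the road, a
        # strung wire, or a tree limb all count as "not empty space" even
        # though nothing touches the ground in front of the sensor.
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        in_corridor = (x > 0.0) & (x < lookahead) & (np.abs(y) < lane_half_width)
        at_body_height = (z > ground_clearance) & (z < roof_height)
        return not np.any(in_corridor & at_body_height)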


No one suggested that it needs an intimate understanding of trucks. It needs to be able to tell the difference between empty space and not empty space.

To do that, it has to understand the truck as a light-colored rectangular prism with corrugated metal sides, suspended a few feet off the road surface by other structures (the wheels). I don't mean that the Tesla has to understand trucks and interstate trucking. I just mean that it has to understand it's a certain kind of object that's an obstruction. Doing this from an image isn't trivial. That's why LIDAR is so often used.


This is from last year's conference, running on a laptop in real time: https://www.youtube.com/watch?v=oJt3Ln8H03s

'Computer tricks' are already here, with full 3D reconstruction in real time.
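
For reference, the usual way stereo demos like that get metric depth is block matching plus the similar-triangles relation Z = f*B/d. A minimal OpenCV sketch (the calibration numbers are made up for illustration; a real system needs rectified images and a measured focal length and baseline):

    import cv2
    import numpy as np

    FOCAL_PX = 700.0   # focal length in pixels -- assumed, not from the demo
    BASELINE_M = 0.12  # distance between the two cameras in metres -- assumed

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching: how far each pixel shifts between the two views.
    matcher = cv2.StereoBM_create(numDisparities=96, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Depth from similar triangles: Z = f * B / d.
    with np.errstate(divide="ignore"):
        depth_m = np.where(disparity > 0, FOCAL_PX * BASELINE_M / disparity, np.inf)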


The demo in the YouTube video is using stereo cameras. Does a Tesla have stereo cameras? Also, it's one thing to have something that can infer distance in a demo. It's another thing entirely to have it operate with the kind of reliability you'd need for deployment as a consumer car autopilot.


Or a single camera, including a neighborhood street stroll:

https://youtu.be/GnuQzP3gty4


Agreed. If you've ever seen the sausage-making behind the scenes that goes into a viable-looking demo...


Hell, I've actually manipulated the database locks of a web app server by hand behind the scenes during a demo!



