
> The side of the trailer is corrugated metal painted white. It's not a smooth white surface. The vision system should have been able to range that.

I'm not sure if the vision system is smart enough to be used for collision detection. The radar and ultrasonics are used to detect vehicles, but I believe the camera is just used to help follow the lines painted on the road.




If that's true, then this crash certainly points out the value of integrating those systems to better identify potential collisions.


Some years back, some federal organization made new rules for semi-trailers with regard to the space between the road and the bottom of the trailer. These rules were designed with one problem in mind: decapitations due to rolling under the trailer. I think there was a rash of fatal wrecks that would have been non-fatal if the car was taller or the trailer was shorter, or something. I am under the impression that the rules are generally met these days by installing metal frames that stick down from the bottom of the trailer and prevent cars from fitting underneath.

Could it be that this trailer was out of spec? Could that be why 1. there is little damage to the truck, 2. the car's body is mostly intact but the roof is gone, and 3. the car did not detect the obstacle?


The US requires a rear bumper on most semitrailers [1] to prevent underrunning on rear-end collisions. But it does not require side bumpers.

[1] https://www.law.cornell.edu/cfr/text/49/571.224


But it won't do much to save you if you look up from your phone, swerve at the last minute, and only clip the outside of the trailer.

(look at the NHTSA crash test videos on youtube)


I didn't realize that! Thank you.


You're talking about the 'Mansfield bar'[0], or 'underride bumper', named after a famous case of a celebrity killed by rear-ending a truck.

[0] http://www.sparebumper.com/index.php?act=viewProd&productId=...


A lot of underride guards are still capable of decapitating people at low speeds. http://www.iihs.org/iihs/news/desktopnews/underride-guards-o...


None of the trailers around here are like that, fwiw. All tall and open underneath.


Depth estimation from cameras is challenging. I've seen recent research in generating depth maps using camera arrays, and even with precision camera calibration and state-of-the-art techniques, there are typically many glaring artifacts in the resulting depth map.

It might be that cameras introduce enough noise and artifacts into the mix that it's not worth integrating them at this point.
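
For a sense of why, here's a minimal sketch of two-camera depth estimation using OpenCV's block matcher. The image paths and calibration numbers are placeholders, not anything from Tesla's setup; the point is just how noisy the resulting depth map tends to be.

    # Minimal stereo-depth sketch; assumes a calibrated, rectified image pair.
    # Paths and calibration values are placeholders.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching is cheap but produces exactly the kind of artifact-laden
    # disparity maps described above.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # depth = f * B / disparity, with focal length f (pixels) and baseline B (meters)
    f_px, baseline_m = 700.0, 0.12   # placeholder calibration
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = f_px * baseline_m / disparity[valid]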


> The radar and ultrasonics are used to detect vehicles

I don't understand why the truck wasn't detected. Not exactly a small target.


Because the radar is at headlight level, and the truck had a lot of ground clearance.


Well, that's clearly a design flaw, and it speaks to a lack of engineering on Tesla's part, which I find frightening.

In a different domain, nuclear power plant design, the Three Mile Island plant had a solenoid-activated valve for letting high pressure out of the reactor vessel. The problem was that although the plant operators set the valve to close, the valve was stuck open. It turns out that the control panel light signifying that the valve was closed simply displayed the signal sent to the solenoid, not that the valve had actually closed. No secondary valve, no meter to determine flow.

There had been an operator's manual change for TMI and other plants of the same design, but the proper hardware change was never made. Hence TMI.

But I worry that the kind of thinking seen at TMI is also at work at Tesla.


FYI, the dashboard on the Tesla shows exactly what the sensors are seeing, so in some respects it's a full feedback loop, unlike the TMI example above.


Well, yes, they got that right, but I was speaking in terms of not thinking through the engineering in a mission-critical design (and in the case of the TMI reactor it had to be approved by the Nuclear Regulatory Commission or whatever its predecessor was). That TMI failed at this level of design thinking, which would seem fairly obvious (as did the NRC in plant design review), scares me in terms of overall engineering thinking. (In other words, if they got that wrong, what else did the designers get wrong?)

In the case of Tesla, they have to engineer so that the car knows about obstacles before crashing into them. The fact that Tesla did not think through this pretty obvious scenario is frightening and suggests a fault in their overall engineering process. I just feel that Tesla cannot be trusted on engineering unless some very smart group checks their design.

I doubt BMW would have made such a mistake.


To be honest, I think BMW would have made exactly that mistake, because nobody in Germany would ever be stupid enough to build a 65 mph freeway with an uncontrolled crossing like that. And don't get me started on those highways on the East Coast that have on-ramps leading into the fast lane.


If human drivers can deal with it, driverless cars certainly can. Driverless cars aren't too timid to accelerate at 100% for a merge or to dart across a break in three lanes of traffic to take a left if the vehicle is capable of doing so effectively. Most people are.

If anything, uncontrolled intersections and high-speed-difference merges (200 ft radius 270° on-ramp, I'm looking at you) are more efficient to handle with driverless cars, provided they're capable of identifying the road (easy) and all the non-static participants (hard).


Please, stop with the fearmongering. You are coming to some weird conclusions.


No, I am someone who has a BS EE/CS degree and worked four years as an engineer at a large firm. It is not fearmongering. It is legitimate questioning about engineering, from someone who has the experience.

For all of his self-promotion and as head of a car company and a rocket company, Elon Musk has never worked as an engineer for a large firm where he'd be mentored in engineering.

For example, the CEO of GM is an engineer (EE) who was mentored in engineering. https://en.wikipedia.org/wiki/Mary_Barra

And the same for IBM: (EE & CS) https://en.wikipedia.org/wiki/Ginni_Rometty


> lack of engineering on the part of Tesla which I find frightening.

That looks a tad like fear mongering.


> That looks a tad like fear mongering.

Please be specific. What is the fear mongering? It is a statement from someone educated as an engineer and trained as an engineer.

Engineering, whether it is designing buildings, cars, or airplanes, is thinking through the scenarios and testing for them. This seems to be a situation where they were ruling out overhead signs, but because of trucks, the clearance of those signs would be well over 10 feet, so they could have checked for anything under 8 or 10 feet, and should have.

And, as I said before, if they didn't think this through, what else that we can't see are they not thinking through?


Just because you're an engineer doesn't make all of your arguments automatically correct. I've got the word "Engineer" in my job title but I don't beat people over the head with it when I want to make an argument for a technical position.

It's not like they haven't thought this through. Tesla has been clear that this is driver assistance, not a fully autonomous car. There are multiple warnings, including one every time you enable Autopilot, making it clear you must pay attention.

This accident, while tragic, looks from current reports like a case of the driver not paying attention to the road. The best way we're going to get to fully autonomous cars is by collecting real-world data, and I think Autopilot has been a reasonable approach in that direction.


It's odd how desperate some people are to give Tesla a free pass.

Basically you're saying that some level of collateral damage is acceptable, and because there's a warning, it's the driver's fault.


That "it's the drivers fault" position is going to be even less tenable if (or, unfortunately, when) someone not in the car is killed or maimed.

To be clear: if someone not paying attention causes a crash, it is their fault. This does not, however, absolve Tesla (or any other manufacturer) from the responsibility for releasing an insufficiently-tested and potentially dangerous system into an environment where anyone who has a passing familiarity with human nature knows it will not be used responsibly.

Tesla calls it 'beta' software, and went so far as to describe the deceased driver as a tester in its press release after the crash. Again, anyone who understands human nature can see that this is a cynical attempt to manipulate the public's opinion. It may, however, come back to bite them, when people start asking WTF they were thinking when they put beta software on the road in the hands of ordinary drivers.


> I've got the word "Engineer" in my job title

If by that you mean you're a software engineer, I do think it's fair to distinguish between getting an EE/CS degree vs., say, a BA in CS plus programming experience.

(I write this as someone in the latter group.)


Yeah, the title is heavily overloaded for sure.

The meta point I'm trying to share is that if you're trying to convince a technical audience (which HN certainly is), berating people by saying "I know better because I have this slip of paper" is the quickest way to get someone to dig in and dismiss your idea.

Engineers are natural skeptics; arguing from the technical side will always be the stronger position.


I no longer work as an engineer. I apply the engineering and safety rules developed for airlines, nuclear power, and oil & gas to patient safety. We put in "forcing functions" to ensure safety. For example, a driver cannot put their car in reverse without their foot on the brake.

As far as I can tell, the design flaw in the Tesla (which I have already stated) is that the object-detection radar did not look high enough to cover the car's clearance. Musk mentioned on Twitter that they didn't want to detect overhead signs, but those signs must be high enough to clear large trucks, so the radar should have been looking 8 to 10 feet up vertically, and it was not. That is why the car crashed. Not operator error. Not beta software. Not going 74 miles per hour. The car lacked the appropriate radar coverage to detect objects up to roughly 8 to 10 feet.

Musk and supporters can spin, but the engineer asks why the sensor wasn't there. To me, it appears to be a flaw in the thought process, and it is hard to understand how Autopilot could be allowed to be turned on without the proper object-detection sensing. And this basic flaw makes me concerned about other flaws in the design.


As an engineer, I try to imagine what the risk analysis for the Tesla cars look like for the Autopilot functionality.

Multiple warnings would seem to me to be insufficient to reduce the hazard presented by the Autopilot functionality; indeed, there are any number of videos of Tesla drivers using the vehicle contrary to the warnings. As one specific example, consider that the Tesla Autopilot cautions the driver to maintain hands on the wheel[0], yet does not enforce this requirement[1] despite having the capability to do so[2]. There's also a problem with Musk vis-à-vis marketing: he's the very public face of the company, and he is frequently overly optimistic in describing the car's capabilities, blurring the line between current and future capabilities, e.g., implying that holding the wheel isn't critical with a wink-wink, nudge-nudge[3].

Tesla's PLM certainly has some sort of mechanism to continuously examine the risk analysis for the car, yet the Autopilot functionality doesn't seem to have been significantly updated to incorporate the changing risk profile. On top of all this, why does Autopilot allow one to speed? Why can one enable Autopilot on a road such as the one in this crash if that is contrary to the instructions and the car is capable of knowing the difference? What does the risk analysis say on the topic of the feature name "Autopilot" being misunderstood by the public? &c. &c.

I do wonder about the engineering processes at Tesla. I admittedly don't work in automotive, but I do work in a regulated industry, and Tesla's apparent engineering process makes me very uneasy. Risk analyses that I have done took into account that the user may not have read the instructions for use, and I struggle to understand how Tesla could not do the same.

[0] "Drivers must keep their hands on the steering wheel." https://www.tesla.com/presskit/autopilot

[1] "We drove for 10 miles without the message appearing." http://www.teslarati.com/what-happens-ignore-tesla-autopilot...

[2] It's not clear that they have a capacitance sensor as on properly-equipped Mercedes, but Teslas allegedly can detect minute torques applied to the steering wheel as happens when the wheel is held. I recall reading this about the Teslas, but can't find a citation at the moment.

[3] “It works almost to the point where you can take your hands off,” Musk laughs, “but we won’t say that. Almost.” http://www.wired.com/2015/10/tesla-self-driving-over-air-upd... also, "But by April, he told a conference that Autopilot was "almost twice as good as a person," even in its first version." and from the same, "Musk himself has retweeted news reports showing drivers using Autopilot with no hands on the wheel." http://www.autonews.com/article/20160705/OEM06/160709956/tes...
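
To make the enforcement point in [2] concrete, here is a purely illustrative sketch of a hands-on-wheel check driven by steering torque. The thresholds, timeouts, and callbacks are invented for illustration; a real EPS integration would be far more involved.

    # Hypothetical hands-on-wheel monitor based on the steering-torque idea
    # in footnote [2]. All values and callbacks are made up.
    import time

    TORQUE_THRESHOLD_NM = 0.1   # a minute torque counts as "hands on"
    NAG_TIMEOUT_S = 15.0        # warn if no torque seen for this long
    DISENGAGE_TIMEOUT_S = 30.0  # then hand control back / slow down

    def monitor_hands_on_wheel(read_torque_nm, warn, disengage):
        last_touch = time.monotonic()
        while True:
            if abs(read_torque_nm()) > TORQUE_THRESHOLD_NM:
                last_touch = time.monotonic()
            idle = time.monotonic() - last_touch
            if idle > DISENGAGE_TIMEOUT_S:
                disengage()
                return
            if idle > NAG_TIMEOUT_S:
                warn()
            time.sleep(0.1)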


This makes me wonder two related things:

1. Can the fact that the driver is not required to maintain physical control of the steering cause the computer algorithms (and, by extension, Tesla) to be considered legally culpable for failing to act as a driver is legally required to under Florida law?

2. Do the statements of Elon Musk et al. imply that the car is fit to act as a driver in its legal responsibilities under the rules of the road (independent of any legal ability for a driver to discharge those responsibilities to the car), so that failing to, say, detect an obstruction in traffic constitutes a defect that violates the implied warranty of merchantability?


Thank you for demonstrating a true engineer's approach to the issue.


> That looks a tad like fear mongering.

That looks a tad like denial, especially in the light of your follow-up.

There are sound arguments to be made here. There's no need to make it personal.


The problem is, as far as I know, if you point the radar upwards, you will start detecting structures above the street like traffic lights and signs.


Well, that might very well be true, but that's what engineering is about: figuring out how to make it work.


Not sure why that would be a problem.


Well, I have a collision warning system in my Mercedes, and recently I drove down a narrow road with loads of grass hanging over it. The collision warning system was going nuts, beeping constantly, and had I not been applying throttle myself it would have braked the car on its own. I definitely remember reading that Teslas generally disregard anything at roof level to avoid tree branches interfering with the autopilot the way the grass was interfering with my Mercedes. I can only imagine Tesla engineers asking themselves, "How often, exactly, are you going to have a stationary metal object at exactly roof height on a highway? If it's at that height on a public road, it can't be a solid/permanent object." Turns out it can be, as rare as that is.


Clearly, any kind of always-on automation that is responsible for braking must be able to gauge the strength and importance of obstacles. For example, driving through that grass slowly and carefully is OK. If there is a wire hanging across a road, that is definitely something to stop for.

I'm thinking that a very good visual AI would be necessary to make distinctions like this; radar won't even see a wire, I don't think, and to lidar it would look a lot like the grass. A touch-sensitive coating on the car would probably be a good idea too, so the car can tell if it is starting to scratch its paint on what it thought was an insignificant obstacle, like a branch.

Simply not looking at obstacles in one area because of the difficulty of rejecting false positives is a terrible idea, and demonstrates that that AI is not ready to drive.


Musk responded with this regarding radar questions:

"Radar tunes out what looks like an overhead road sign to avoid false braking events"

https://twitter.com/elonmusk/status/748625979271045121


Overhead road signs have a huge clearance (think of those tall trucks). I don't understand why they didn't check for a clearance of 10 feet.
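
A rough sketch of the kind of height gate being asked for, with made-up numbers, assuming a radar that reports an elevation angle per target (which, per the comment below, Tesla's unit likely did not):

    # Keep anything whose estimated height is below overhead-sign clearance;
    # ignore genuine overhead structures. All values are illustrative.
    import math

    CLEARANCE_LIMIT_M = 3.0   # ~10 ft; legal overhead signs sit higher
    SENSOR_HEIGHT_M = 0.6     # radar mounted roughly at headlight level

    def is_braking_relevant(range_m, elevation_deg):
        """True if the return could be an obstacle below overhead-sign height."""
        height_m = SENSOR_HEIGHT_M + range_m * math.sin(math.radians(elevation_deg))
        return height_m < CLEARANCE_LIMIT_M

    # Trailer side ~1.2 m up at 60 m vs. a sign gantry ~5.5 m up at 60 m
    print(is_braking_relevant(60.0, 0.57))   # True: brake-worthy
    print(is_braking_relevant(60.0, 4.7))    # False: treated as an overhead sign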


Tesla's radar probably just looks outward in a horizontal plane. That's what the old Eaton VORAD and most of its successors did. There are newer automotive radars which scan both horizontally and vertically [1], but Tesla doesn't seem to have those.

When I was developing Grand Challenge software, I used to have an Eaton VORAD looking out my window at home, tracking vehicles going through an intersection. It could see cars, but usually not bicycles. Range and range rate were good; azimuth info was flaky. Stationary objects didn't register because it was a Doppler radar. Output from the device was a list of targets and positions, encapsulated in the Serial Line Internet Protocol (SLIP).

The big problem with these radars is avoiding seeing the road itself as an obstacle. When you're moving, everything has a Doppler radar return. Usually, the road is hit at such an oblique angle that it doesn't reflect much. But there are exceptions. The worst case is a grating-floor bridge.

LIDAR isn't a panacea. The charcoal-black upholstery used on many office chairs is so non-reflective in IR that a SICK LMS can't see it at point-blank range.

[1] http://www.fujitsu-ten.com/business/technicaljournal/pdf/38-...
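
For a sense of what you do with a target list like that, here is a rough sketch of turning (range, range rate, azimuth) records into collision warnings via time-to-collision. Field names and thresholds are illustrative, not the VORAD's actual interface.

    # Illustrative consumer of a VORAD-style target list. Not the real format.
    from dataclasses import dataclass

    @dataclass
    class RadarTarget:
        range_m: float         # distance to target
        range_rate_mps: float  # negative when closing
        azimuth_deg: float     # flaky on the old units, per the comment

    TTC_WARN_S = 2.5

    def collision_warnings(targets):
        warnings = []
        for t in targets:
            if t.range_rate_mps < 0:                    # closing on it
                ttc = t.range_m / -t.range_rate_mps
                if ttc < TTC_WARN_S:
                    warnings.append((t, ttc))
        return warnings

    print(collision_warnings([RadarTarget(40.0, -20.0, 1.5)]))  # ~2 s to impact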


Also, aviation radars are built to reject signals from objects that are stationary relative to the ground, so they aren't swamped by returns from the ground itself ("ground clutter"). Does Tesla's radar do something similar? If so, it'll have no trouble picking up other cars traveling along the road, but won't be able to see a stationary truck.
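
As a rough illustration of that filter (made-up numbers; sign conventions vary by unit):

    # Sketch of "reject anything stationary relative to the ground".
    EGO_SPEED_MPS = 33.0        # ~74 mph
    STATIONARY_BAND_MPS = 2.0   # returns in this band look like ground clutter

    def target_ground_speed(range_rate_mps):
        # For a target dead ahead, range rate = target ground speed - ego speed,
        # so ground speed = ego speed + range rate (range rate < 0 when closing).
        return EGO_SPEED_MPS + range_rate_mps

    def keep_target(range_rate_mps):
        # Drops overpasses, signs, clutter -- and a truck parked across the road.
        return abs(target_ground_speed(range_rate_mps)) > STATIONARY_BAND_MPS

    print(keep_target(-20.0))   # car ahead doing ~13 m/s: kept
    print(keep_target(-33.0))   # stationary trailer: filtered out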



