Typically the police and NHTSA investigate car accidents, not the NTSB.
While the NTSB doesn't typically investigate car accidents, it does investigate commercial transportation (truck) accidents. It likely took this one on because it involved a truck, a possible systemic issue (cars going under trailers), and Tesla's Autopilot as a factor.
One notable NTSB rule is that Tesla and the other parties [0] may not comment on the investigation except with the permission of the NTSB's senior investigators.
Thanks for commenting on the structure of the investigation - having skimmed dozens of NTSB plane crash reports and dug deep into a few (ex: RAR P-51), I'm very, very impressed with their diligence.
This is one of the few places I feel the simple "shouting fire in a crowded theatre" analogy holds true. It's in the greater interest of the overall public good that the parties involved in an NTSB investigation are not given the opportunity to interfere with the process of the investigation.
Also, it's not actually a gag order. I can't find any actual legal regulation behind it, which leads me to believe this is less an NSL-style "comply or you will be silenced" situation and more of a gentleman's agreement developed over decades of interacting with an extremely small pool of actors (American railroads, American truck makers, and airlines operating in US airspace). In fact, I grabbed this exemplary (if a little long) quote off the NTSB website, from a press release about the FAA accidentally releasing investigation information while complying with a FOIA request before the investigation was complete. (The NTSB doesn't care about the FOIA request, just that you wait until their job is done before answering it.)
>> The NTSB depends upon full participation and technical assistance by the parties in our accident investigations in order to ensure that our investigations are objective, rigorous, and complete. Allowing any party to release investigative information without approval may enable that party to influence the public perception of the investigation and undercut the fairness of the process.
>> Accordingly, we require that any release of information related to an ongoing accident investigation be coordinated and approved by the NTSB prior to its release. When the investigation is complete, these restrictions no longer apply.
Not much new yet. Vehicle speed was 74 mph and Autopilot was engaged. The Tesla was not under power after the crash. There are pictures of the semitrailer, which was not damaged much by the underrun, and of the Tesla, on which everything above hood level was bent back or sheared off.
The side of the trailer is corrugated metal painted white. It's not a smooth white surface. The vision system should have been able to range that.
> The vision system should have been able to range that.
Please understand that the vision system in a Tesla isn't like your vision system. There is no AI constructing a model of a 3D world out of 2D visual data, with a road surface and 3D objects located within it. There is no human or higher-mammal level of comprehension of the scene. There is probably a series of algorithmic tricks that lets the car determine which direction the distant road is in. The computer can then meld that information with the other, shorter-ranged sensors in the car that do return distance data.
The reason things like LIDAR are used in self-driving cars is that they can numerically build a model of the 3D scene without an AI having to reconstruct it from 2D camera data. They return distance information, so the data starts out 3D and far less interpretation is necessary. In all likelihood, nothing in a Tesla understands what a truck trailer is, so how is it going to interpret that set of 2D optical data as an object that's like a movable wall suspended a few feet in the air? There's probably only a rudimentary notion of an obstacle in the software.
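To make the distinction concrete, here's a minimal sketch (not Tesla's or anyone's actual pipeline; the numbers are made up): a LIDAR return already carries a measured range, so turning it into a 3D point is just trigonometry, while a camera pixel only gives you a bearing until some estimator supplies the missing depth.

    import math

    # A single LIDAR return is (azimuth, elevation, range): the range is
    # measured directly, so the 3D position of the reflecting surface is
    # just a coordinate conversion.
    def lidar_return_to_point(azimuth_deg, elevation_deg, range_m):
        az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
        x = range_m * math.cos(el) * math.cos(az)   # forward
        y = range_m * math.cos(el) * math.sin(az)   # left
        z = range_m * math.sin(el)                  # up
        return (x, y, z)

    # A camera pixel, by contrast, only defines a ray through the focal
    # point. Without stereo, motion parallax, or a learned depth model,
    # the depth along that ray is unknown.
    def pixel_to_ray(u, v, fx, fy, cx, cy):
        return ((u - cx) / fx, (v - cy) / fy, 1.0)  # direction only, no range

    print(lidar_return_to_point(5.0, 1.0, 80.0))    # a point roughly 80 m ahead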
Mobileye claims more than that. See their promotional video on rear-end collision avoidance. [1] See them displaying distance to target. You can buy this as a smartphone app (!), or as a retrofit.[2]
Here's a long theory talk by Mobileye's CTO and co-founder.[3]
Obviously they are working on interpreting camera data as 3D, and they are not finished yet. They specifically stated that their software isn't ready to detect a crossing semi trailer (not until 2018).
I'm highly skeptical that they'll ever be able to differentiate extremely similar targets, at distance, with a relatively cheap camera working in the visible spectrum. I'd love to hear their tricks for getting that to work. In the meantime, people creating actual driverless cars are using LiDAR for a reason: 20 Hz, full 360-degree view with a real range-to-target measurement, not an algorithmic estimate.
I wonder if Elon will ever live down his comments that LIDAR "doesn’t make sense" and is "unnecessary" in the context of an autonomous car after this[1].
Musk's reasoning is as follows: computer vision is getting better and cameras will always be cheap. We use LIDAR today because computer vision isn't that great.
That said, LIDARs, and sensors that do the same thing as LIDAR, are getting cheaper. One thing that might change the game is the development of sensors that don't require mechanical scanners. DARPA recently demonstrated a non-mechanical way to scan a laser beam very fast, and mm-wave radar is starting to approach the capabilities of LIDAR [1].
LIDAR can work quite well in rain and fog with proper processing. There are range-gated imagers for that. [1] You tell the imager to ignore returns for a delay of N nanoseconds, and you don't see the fog reflections for roughly the first N/2 feet (light covers about a foot per nanosecond, and the return is a round trip). You can run the range gate in and out until you see through the rain and fog. These are available as hand-held devices.
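As a back-of-the-envelope sketch of that gating arithmetic (illustrative numbers only, not any particular imager's spec):

    C_M_PER_NS = 0.299792458  # speed of light, metres per nanosecond

    # A return from distance d arrives after a round trip of 2*d/c, so a
    # gate delay of t nanoseconds blanks out everything closer than c*t/2.
    def blanked_range_m(gate_delay_ns):
        return C_M_PER_NS * gate_delay_ns / 2.0

    def gate_delay_ns_for_range(min_range_m):
        return 2.0 * min_range_m / C_M_PER_NS

    print(blanked_range_m(100))           # ~15 m of near-field rain/fog ignored
    print(gate_delay_ns_for_range(50.0))  # ~334 ns delay to ignore the first 50 m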
This technology was used on ships in fog back in 2004, but now that it's down to hand-held size, it seems to be more of a military thing.
There are lots of interesting things you can do with LIDAR that the Velodyne people don't do. "First and last", for example. But enough for tonight.
I'm not trying to be a proponent of LIDAR. I'm trying to explain the difference between interpreting 2D visual data as 3D and what a device like LIDAR does. RADAR, for what it's worth, does work in rain and fog.
Looking through rain with LIDAR is like looking through chaff for radar. An X-ray machine wouldn't work very well if there was a cloud of lead dust in the air.
Does any current optical technology work any better in rain and fog? Not that 'optical' is a requirement anyway - what is needed is anything that does work.
The radars other manufacturers use work well - e.g., my Mazda will detect obstacles and start braking even in weather conditions where optical visibility is far worse. Pretty much all other car manufacturers use such radars for adaptive cruise control (front) and blind-spot monitoring (rear).
This is definitely a cost-benefit calculation. LIDAR is not cheap at the moment. In years to come we may have the perspective that LIDAR is necessary, but right now it doesn't seem worth it.
A Tesla isn't cheap either, and Tesla didn't have to introduce their beta-level software/hardware to the public and then claim "you're using it wrong" when some guy's head gets sheared off by a semi.
No one suggested that it needs an intimate understanding of trucks. It needs to be able to tell the difference between empty space and not-empty space, and it needs to do that for the entire volume the vehicle will occupy, not just some of it. Otherwise it will run into trees, wires strung up to decapitate motorcyclists [1], farm animals, and other obstacles that may or may not extend all the way to the ground directly in front of the sensor.
> No one suggested that it needs an intimate understanding of trucks. It needs to be able to tell the difference between empty space and not empty space.
To do that, it has to understand the truck as a light-colored rectangular prism with corrugated metal sides, suspended a few feet off the road surface by other structures (the wheels). I don't mean that the Tesla has to understand trucks and interstate trucking, just that it has to recognize a certain kind of object as an obstruction. Doing this from an image isn't trivial. That's why LIDAR is so often used.
The demo in the YouTube video is using stereo cameras. Does a Tesla have stereo cameras? Also, it's one thing to have something that can infer distance in a demo. It's another thing entirely to have it operate with the kind of reliability you'd need for deployment as a consumer car autopilot.
I don't know anything about the vision system, but I noticed that this information doesn't match well with the statement from Tesla.
The intersection appears to be at the bottom of a hill, not the top as I had expected based on Tesla's account (white truck vs. white sky).
I'm no traffic accident reconstruction expert but it appears to me that the top of the truck would have been below the horizon. Indeed, the photo is taken looking east, the direction the Tesla was traveling in.
In fact they're both right. Indeed this is at the bottom of a hill, but the intersection in question happens to be behind a small hump in the road, conveniently covered in a shadow in Google Street View (whereas the road behind it is brighter, sunlit):
If you squint or zoom in, the light square on the right-hand side of the road is in fact a vending-machine-stocking truck parked at a gas station on the far side of the intersection where this all happened, which serves as a useful height comparison; the semi was probably a little larger than that, but its wheels were likewise probably below the horizon created by the hump in the road. You can therefore imagine that more than half of the trailer, wheels included, might have been below that horizon - frustrating any sort of machine visual inspection, because that's difficult even for human eyes.
This Google Maps image is 440 m from the actual crash site, which at 74 mph corresponds to about 13.3 s of reaction time. (Going 65 mph instead would have bought only an extra 1.8 s.) It was intentionally chosen to show "here is where a driver travelling east would have had no idea that there was a semi beginning to turn, pulling out onto the road in front of him."
The other data point needed is "here is where an attentive driver would have had no excuse for not knowing there was a semi in front of him;" that's about here:
This is 360 m away, giving at least 11 s of advance notice to the oncoming driver. Even with 5 s of reaction time before he slams on the brakes, that should have been plenty. Clearly driver inattention is a huge problem here, and the driver was presumably inattentive because he had been lulled by the ease of the driving system.
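For anyone who wants to check the arithmetic above (same distances and speeds as quoted):

    MPH_TO_MS = 0.44704  # metres per second in one mile per hour

    def seconds_to_cover(distance_m, speed_mph):
        return distance_m / (speed_mph * MPH_TO_MS)

    print(seconds_to_cover(440, 74))                              # ~13.3 s to the crash site
    print(seconds_to_cover(440, 65) - seconds_to_cover(440, 74))  # ~1.8 s extra at 65 mph
    print(seconds_to_cover(360, 74))                              # ~10.9 s from the second viewpoint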
It's much harder to tell whether the semi should have seen the Model S and yielded right of way. As mentioned, at 440 m or so the Tesla would likewise have been invisible to the truck driver, so that's presumably when he started his turn; it's not clear to me whether he would then have seen the car before his trailer entered the oncoming lane, at which point he'd have had no better option than stepping on the gas.
But it does sound like the Tesla's road detection is based on a visual algorithm rather than something more direct like radar, and this sort of trailer can certainly be indistinct both when you are too far away (the road curves too much for you to see its wheels) and when you get too close (the nearer you get, the more of your field of view it fills)...
> The side of the trailer is corrugated metal painted white. It's not a smooth white surface. The vision system should have been able to range that.
I'm not sure the vision system is smart enough to be used for collision detection. The radar and ultrasonics are used to detect vehicles; I believe the camera is just used to help follow the lines painted on the road.
Some years back, some federal agency made new rules for semitrailers regarding the space between the road and the bottom of the trailer. The rules were designed with one problem in mind: decapitations due to cars rolling under the trailer. I think there was a rash of fatal wrecks that would have been non-fatal if the car had been taller or the trailer lower. I am under the impression that the rules are generally met these days by installing metal frames that stick down from the bottom of the trailer and prevent cars from fitting underneath.
Could it be that this trailer was out of spec? Could that be why 1. there is little damage to the truck, 2. the body is mostly intact but the roof is gone, and 3. the car did not detect the obstacle?
Depth estimation from cameras is challenging. I've seen recent research in generating depth maps using camera arrays, and even with precision camera calibration and state-of-the-art techniques, there are typically many glaring artifacts in the resulting depth map.
It might be that cameras introduce enough noise and artifacts into the mix that it's not worth integrating them at this point.
Well, that's clearly a design flaw, and it speaks to a lack of engineering on Tesla's part that I find frightening.
In a different domain, nuclear power plant design, the Three Mile Island plant had a solenoid-activated valve for letting high pressure out of the reactor vessel. The problem was that although the plant operators set the valve to close, it was stuck open. It turned out that the control panel light signifying that the valve was closed simply displayed the signal sent to the solenoid, not whether the valve had actually closed. No secondary valve, no flow meter.
There had been an operator's manual change for TMI and other plants of the same design, but the proper hardware change was never made. Hence TMI.
But I worry about the same kind of thinking at TMI showing up at Tesla.
Well, yes, they got that right, but I was speaking in terms of not thinking through the engineering of a mission-critical design (and in the case of the TMI reactor, one that had to be approved by the Nuclear Regulatory Commission or its predecessor). That TMI's designers failed at this fairly obvious level of design thinking (as did the NRC in its plant design review) scares me in terms of overall engineering thinking. In other words, if they got that wrong, what else did the designers get wrong?
In the case of Tesla, they have to engineer the car so it knows about obstacles before crashing into them. The fact that Tesla did not think through this pretty obvious scenario is frightening and suggests a fault in their overall engineering process. I just feel that Tesla cannot be trusted on engineering unless some very smart group checks their design.
To be honest, I think BMW would have made exactly that mistake because nobody in Germany would ever be stupid enough to build a 65mph freeway with an uncontrolled crossing like that. And don't get me started on those highways on the east coast that have on ramps leading into the fast lane.
If human drivers can deal with it, driverless cars certainly can. Driverless cars aren't too timid to accelerate at 100% for a merge, or to dart across a break in three lanes of traffic to take a left, if the vehicle is capable of doing so effectively. Most people are.
If anything, uncontrolled intersections and high-speed-difference merges (200-foot-radius 270-degree on-ramp, I'm looking at you) are more efficiently handled by driverless cars, provided they're capable of identifying the road (easy) and all the non-static participants (hard).
No, I am someone with a BS EE/CS degree who worked four years as an engineer at a large firm. It is not fear mongering; it is legitimate questioning about engineering, from someone who has the experience.
For all of his self-promotion and as head of a car company and a rocket company, Elon Musk has never worked as an engineer for a large firm where he'd be mentored in engineering.
Please be specific. What is the fear mongering? It is a statement from someone educated as an engineer and trained as an engineer.
Engineering, whether it is designing buildings, cars, or airplanes, is thinking through the scenarios and testing for them. This seems to be a situation where they were ruling out overhead signs; but because of trucks, the clearance of those signs has to be well over 10 feet, so they could have checked for anything under 10 feet, or 8 feet, and should have.
And, as I said before, if they didn't think this through, what else that we can't see are they not thinking through?
Just because you're an engineer doesn't make all of your arguments automatically correct. I've got the word "Engineer" in my job title but I don't beat people over the head with it when I want to make an argument for a technical position.
It's not like they haven't thought this through. Tesla has been clear that this is driver assistance, not a fully autonomous car. There are multiple warnings, including one every time you enable Autopilot, making it clear you must pay attention.
This accident, while tragic, looks from current reports like a case of the driver not paying attention to the road. The best way we're going to get to fully autonomous cars is by collecting real-world data, and I think Autopilot has been a reasonable approach in that direction.
That "it's the drivers fault" position is going to be even less tenable if (or, unfortunately, when) someone not in the car is killed or maimed.
To be clear: if someone not paying attention causes a crash, it is their fault. This does not, however, absolve Tesla (or any other manufacturer) from the responsibility for releasing an insufficiently-tested and potentially dangerous system into an environment where anyone who has a passing familiarity with human nature knows it will not be used responsibly.
Tesla calls it 'beta' software, and went so far as to describe the deceased driver as a tester in its press release after the crash. Again, anyone who understands human nature can see that this is a cynical attempt to manipulate the public's opinion. It may, however, come back to bite them, when people start asking WTF they were thinking when they put beta software on the road in the hands of ordinary drivers.
If by that you mean you're a software engineer, I do think it's fair to distinguish between getting an EE/CS degree vs say a BA in CS plus programming experience.
The meta point I'm trying to make is that if you're trying to convince a technical audience (which HN certainly is), berating people with "I know better because I have this slip of paper" is the quickest way to get them to dig in and dismiss your idea.
Engineers are natural skeptics, arguing from the technical side will always be the stronger position.
I no longer work as an engineer. I apply the engineering and safety rules developed for airlines, nuclear power, and oil & gas to patient safety. We put in "forcing functions" to ensure safety. For example, a driver cannot put their car in reverse without their foot on the brake.
As far as I can tell, the design flaw in the Tesla (which I have already stated) is that the object-detection radar did not look high enough to cover the full height of the car. Musk mentioned on Twitter that they didn't want to detect overhead signs, but those signs must be high enough to clear large trucks, so the radar should have been looking 8 to 10 feet up vertically; it was not. That is why the car crashed. Not operator error. Not beta software. Not going 74 miles per hour. The car lacked the appropriate radar coverage to detect objects 8 to 10 feet or so off the ground. Musk and his supporters can spin, but the engineer asks why the sensor wasn't there. To me it appears to be a flaw in the thought process, and it is hard to understand how Autopilot could be allowed to be turned on without the proper object-detection sensing. This basic flaw makes me concerned about other flaws in the design.
As an engineer, I try to imagine what the risk analysis for the Tesla cars look like for the Autopilot functionality.
Multiple warnings would seem to me to be insufficient to reduce the hazard presented by the Autopilot functionality; indeed, there are any number of videos of Tesla drivers using the vehicle contrary to the warnings. As one specific example, consider that the Tesla Autopilot cautions the driver to maintain hands on the wheel [0], yet does not enforce this requirement [1] despite having the capability to do so [2]. There's also a problem with Musk vis-à-vis marketing: he's the very public face of the company, and he is frequently overly optimistic in describing the car's capabilities, blurring the line between current and future capabilities, e.g., implying that holding the wheel isn't critical with a wink-wink, nudge-nudge [3].
Tesla's PLM certainly has some mechanism to continuously re-examine the risk analysis for the car, yet the Autopilot functionality doesn't seem to have been significantly updated to reflect the changing risk profile. On top of all this, why does Autopilot allow one to speed? Why can one enable Autopilot on a road such as this one if doing so is contrary to the instructions and the car is capable of knowing the difference? What does the risk analysis say about the feature name "Autopilot" being misunderstood by the public? &c. &c.
I do wonder about the engineering processes at Tesla. I admittedly don't work in automotive, but I do work in a regulated industry, and Tesla's apparent engineering process makes me very uneasy. Risk analyses that I have done took into account that the user may not have read the instructions for use, and I struggle to understand how Tesla could not do the same.
[2] It's not clear that they have a capacitance sensor as on properly-equipped Mercedes, but Teslas allegedly can detect minute torques applied to the steering wheel as happens when the wheel is held. I recall reading this about the Teslas, but can't find a citation at the moment.
1. Can the fact that the driver is not required to maintain physical control of the steering mean the computer algorithms (and, by extension, Tesla) could be considered legally culpable for failing to act as a driver is legally required to under Florida law?
2. Do the statements of Elon Musk et al. imply that the car is fit to act as a driver in meeting its legal responsibilities under the rules of the road (independent of any legal ability for a driver to delegate those responsibilities to the car), such that failing to, say, detect an obstruction in traffic constitutes a defect violating the implied warranty of merchantability?
Well, I have a collision warning system in my Mercedes, and recently I drove down a narrow road with loads of grass hanging over it - the collision warning system went nuts, beeping constantly, and had I not been applying throttle myself it would have braked the car on its own. I definitely remember reading that Teslas generally disregard anything at roof level so that tree branches don't interfere with Autopilot the way the grass was interfering with my Mercedes. I can only imagine Tesla engineers asking themselves, "How often, exactly, are you going to have a stationary metal object at exactly roof height on a highway? If it's at that height on a public road, it can't be a solid, permanent object." Turns out it can be, as rare as that is.
Clearly, any kind of always-on automation that is responsible for braking must be able to gauge the solidity and importance of obstacles. For example, driving through that grass slowly and carefully is OK. If there is a wire hanging across the road, that is definitely something to stop for.
I'm thinking that a very good visual AI would be necessary to make distinctions like this; radar won't even see a wire, I don't think, and to LIDAR it would look a lot like the grass. A touch-sensitive coating on the car would probably be a good idea too, so the car can tell when it is starting to scratch its paint on what it thought was an insignificant obstacle, like a branch.
Simply not looking at obstacles in one area because of the difficulty of rejecting false positives is a terrible idea, and demonstrates that that AI is not ready to drive.
Tesla's radar probably just looks outward in a horizontal plane. That's what the old Eaton VORAD and most of its successors did. There are newer automotive radars which scan both horizontally and vertically [1], but Tesla doesn't seem to have those.
When I was developing Grand Challenge software, I used to have an Eaton VORAD looking out my window at home, tracking vehicles going through an intersection. It could see cars, but usually not bicycles. Range and range rate were good; azimuth info was flaky. Stationary objects didn't register because it was a Doppler radar. Output from the device was a list of targets and positions, encapsulated in Serial Line Interface Protocol.
The big problem with these radars is avoiding seeing the road itself as an obstacle. When you're moving, everything has a Doppler radar return. Usually the road is hit at such an oblique angle that it doesn't reflect much, but there are exceptions. The worst case is a grating-floor bridge.
LIDAR isn't a panacea. The charcoal-black upholstery used on many office chairs is so non-reflective in IR that a SICK LMS can't see it at point-blank range.
Also, aviation radars are built to reject signals from objects that are stationary relative to the ground so they aren't swamped by signals from the ground itself ("ground clutter"). Does Tesla's radar do something similar? If so it'll have no trouble picking up other cars traveling along the road but won't be able to see a stationary truck.
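To make the ground-clutter point concrete, here's a toy sketch (my own illustration, not Tesla's or anyone's actual filter) of how a Doppler-style moving-target filter discards anything whose inferred ground speed is near zero - which is exactly what a stopped or slow-crossing trailer looks like to a forward radar:

    def moving_targets_only(targets, own_speed_ms, min_ground_speed_ms=2.0):
        # Each target is (range_m, range_rate_ms), range_rate negative when
        # closing. A stationary obstacle closes at exactly the host's own
        # speed, so its inferred (radial) ground speed is ~0 and it gets
        # rejected along with the ground clutter.
        kept = []
        for range_m, range_rate_ms in targets:
            ground_speed = own_speed_ms + range_rate_ms  # ~0 for stationary objects
            if abs(ground_speed) > min_ground_speed_ms:
                kept.append((range_m, range_rate_ms))
        return kept

    # Host doing 33 m/s (~74 mph): a car ahead doing 25 m/s is kept;
    # a stationary trailer closing at -33 m/s is filtered out.
    print(moving_targets_only([(60.0, -8.0), (90.0, -33.0)], own_speed_ms=33.0))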
The camera portion of Tesla's vision system produces a grayscale image that's then run through some algorithmic pattern detection to figure out what's what in the scene.
Here is an actual dataset [1] of stills from Daimler that's used to train algorithms for pedestrian detection.
The workings of this system are similar to the cameras that are mounted above intersections to detect waiting cars (instead of sensors embedded into the roadway) [2]
I work with imaging systems that identify medicines. While they are wildly simpler in terms of analysis (basically using the silhouette to calculate shape and size), most attempts to add colour to the equation end up adding a new, complex element that the inspection can fail on, mostly because colour is inconsistent under varying lighting conditions.
Again, very different systems, but might be relevant when considering if straight contrast is enough to work with or not.
My understanding is that the Tesla camera system has just a single camera, so it won't be able to do normal binocular vision. And the fact that the pattern is just a set of horizontal lines would make life difficult for an egomotion-style estimate of the pattern's distance. There is some pattern on the side of the truck besides the horizontal lines, but it's pretty subtle, and the accident happened while the car was heading east in the late afternoon, so sun glare might have been a factor.
Part of my work deals with medical patient safety but I study all sorts of safety including airlines, nuclear power, oil & gas drilling and refining, ....
There should be a mechanical fail-safe if the power cuts out in this case.
In our DARPA Grand Challenge vehicle in 2005, we had a non-computerized system for an emergency stop. A hardware timer had to be reset every 120ms by the computers. If it timed out, a relay dropped out, and an electric motor with two sources of DC power (the main power system, and a battery) drove the brake pedal down until a hydraulic pressure switch detected full brake pressure and turned it off.
In addition, the throttle control went through a pull cable device with an electromagnet. With the electromagnet on, a servomotor could operate the throttle. The emergency stop system would drop power on the electromagnet if the stall timer timed out, or on some other fault conditions. That forced the throttle to idle.
Then we had an Eaton VORAD radar. That data went into the main mapping system, along with the LIDAR data, but it was also processed by a simple separate process that computed time to collision from range and range rate; if that process computed an unsafe distance, or failed to reset the watchdogs, it tripped the emergency stop system. If this happened, the LED sign on the back of our vehicle displayed "COLLISION IMMINENT".
This happened once during the Grand Challenge preliminaries. Several vehicles were in the starting gates side by side. We were ready to go, all systems running and armed, waiting for DARPA to release the hold signal they were sending by radio. The organizers decided to release the CMU vehicle first, and it came out of the starting gate and cut in front of our vehicle. The safety systems tripped and "COLLISION IMMINENT" appeared in the sign. After a few seconds, with the threat gone, the system reset and the sign went dark.
This was all fully automatic. There was also a remote engine kill system, required by DARPA.
We didn't win. But we didn't crash or hit anything. There were Grand Challenge entries that ran away, including, in 2004, one from CMU. Another one ran away because they filled their disk with logging info and this stalled the software. Steering and throttle froze, and the vehicle ran away until it hit something.
If you work on automatic driving, you have to prepare for trouble like this.
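A minimal sketch of that kind of independent time-to-collision check and stall watchdog (illustrative thresholds and structure, not the actual Grand Challenge code):

    import time

    TTC_LIMIT_S = 2.0        # illustrative trigger threshold
    WATCHDOG_LIMIT_S = 0.12  # e-stop if not reset within 120 ms

    def time_to_collision(range_m, range_rate_ms):
        # Seconds until impact; infinite if the target is opening or holding range.
        if range_rate_ms >= 0:
            return float("inf")
        return range_m / -range_rate_ms

    def check_radar_targets(targets, trigger_estop):
        # Runs independently of the main mapping/planning software.
        for range_m, range_rate_ms in targets:
            if time_to_collision(range_m, range_rate_ms) < TTC_LIMIT_S:
                trigger_estop("COLLISION IMMINENT")
                return

    class StallWatchdog:
        # If the main software stops petting this, the e-stop relay drops
        # out and an independent actuator drives the brakes.
        def __init__(self):
            self.last_reset = time.monotonic()
        def pet(self):
            self.last_reset = time.monotonic()
        def expired(self):
            return time.monotonic() - self.last_reset > WATCHDOG_LIMIT_S

    check_radar_targets([(40.0, -25.0)], trigger_estop=print)  # TTC 1.6 s -> prints COLLISION IMMINENT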
Say your battery dies while the car is running (or, even, something causes a physical disconnect between the batteries and the rest of the car - a wiring fault, or whatever). Ideally you would be able to pull to the side of the road while your vehicle coasts, braking as necessary.
If the system detects a power disconnect and instantly engages all brakes, does that help or harm? Additionally, since powered items like anti-lock brakes are now unavailable, how hard should the brakes be engaged? Fully? Slightly?
Slamming on the brakes in a failure scenario is not automatically the right answer. Odds are it's probably the wrong answer more often than not.
To answer the question, it's quite possible to have a braking system that engages in the event of power failure. I've even heard of double-redundant braking systems, where three systems in total have to fail before the brakes do. It was on a Rolls-Royce, as I recall. The owner was crowing about how his brake job cost more than my car.
If there is no power, then chances are there is limited steering as well. Probably safer to have a motionless 2-ton rock than a 2-ton ballistic missile.
Power steering might be off, but that doesn't mean steering is limited.
A motionless 2-ton rock in the middle of a busy interstate because of a power blip is a terrible idea. And again, how hard exactly should the system brake? Pick a value between 0% and 100% that brakes maximally without locking up the wheels.
A 2-ton brick spinning down a busy interstate because the brakes locked up is arguably even worse than a motionless one.
Wait, do regular cars engage the emergency brake in a fail-safe way? I've always treated the emergency brake as a way to keep a stopped car from moving, not as a way to make a moving car stop.
No, a regular car will not engage brakes without human interaction. The fail-safe is having two independent systems - hydraulic for the foot brake and wire rope for the hand brake in most vehicles.
Additionally automatic transmissions have a transmission lock, but that won't work while the vehicle is in motion.
Some modern cars use electric systems for both, I'm not sure how that would work.
Also, the hydraulic brake system is built with redundancy (dual circuit), so even a sudden big leak in a brake line will leave you with some braking power.
And the power braking system, being pneumatic IIRC, keeps working for a couple of hard stomps on the pedal even if the engine stops running and you lose 12V.
Pretty much the only thing you can expect to lose is the ABS. Even then, I understand that system has a fail-safe that keeps the car from spinning in the event of a malfunction and brake lockup. You can see this in ABS-related accidents as straight skid marks. But I don't think that works when you've lost electric power.
Edit: actually, the recent Koenigsegg One:1 high-speed crash (driver not hurt) during testing at the Nurburgring was caused by an ABS sensor failure; you can see the hallmarks in photos. Koenigsegg also deserves big props for having been completely open about it.
No, the handbrake (not "emergency brake") is actually connected mechanically to a completely separate set of brakes on the rear wheels. Only on custom-built drift cars à la Ken Block is the handbrake connected to the normal brake calipers.
It depends on the car. Some of them engage the same calipers the hydraulic system uses. Some have a separate caliper or drum. Older, drum brake cars engaged the same shoes the hydraulic system used. And if you go back far enough, some had mechanical pawls or band type brakes that engaged at the transmission.
And drift cars use a separate hydraulic brake attached to the rear disks.
This is correct. Some emergency brakes use a screw mechanism to push the brake cylinder piston against the pads - the same stopping mechanism as pushing on the brake pedal.
Others (usually less expensive cars) have a set of drum brakes inside the disk brake that act as emergency brakes.
If you have rear drum brakes, it's the same as my first example. The emergency brake activates the normal braking system.
Maybe. But that's the main (traction) battery. Regenerative braking is certainly out, but the 12V battery should not be disconnected and should provide enough energy to apply the brakes. Not sure how the Tesla is actually engineered, though, as I'm a Leaf owner.
I haven't been following this case particularly closely, but looking at the pictures, it seems like a dangerous place to rely on Autopilot, with those intersections. Whether Tesla's system can or will be improved as a result of this accident isn't clear at this point. But it does seem hard to argue that fault rests with Tesla, considering Autopilot was being used in a manner that goes against their instructions. It also makes me wonder if Tesla might be better off with some sort of whitelisting system that prevents Autopilot from being engaged on roads like this. That would certainly reduce the risk of accidents resulting from misuse, although I guess it might open them up to more blame in the event of an accident that did occur.
> looking at the pictures it seems like a dangerous place to rely on autopilot with the intersections
To some extent, I agree--uncontrolled intersections on highways are dangerous.
On the other hand, in this particular case, a 14 foot tall, 75 foot long, 40-ton obstacle blocking the roadway is probably something any autopilot should be able to detect and attempt to avoid. This was not a case of the Tesla not being able to stop in time because someone pulled out in front of them. The Tesla did not slow down at all from its cruising speed of 74mph before impacting the trailer. It simply did not detect the obstacle.
That obstacle was only on the road because of the intersection. It is extremely unlikely that a truck would be in a similar position on a hypothetical Tesla-approved autopilot road unless an accident had occurred directly in front of you, and Tesla likely has other ways of detecting such an accident. It's similar to someone using Autopilot driving through a red light and getting t-boned. It isn't a good look for Tesla, but I can't fault them for not preventing a crash when the car was clearly being used under conditions they recommend against.
It's clear to me that the auto steer technology is pretty far from an "auto pilot", but the drivers are not making the distinction.
I think that semi-automatic systems like this are fundamentally broken due to unrealistic expectations in the man-machine interface.
There is virtually no chance that the driver will be alert and able to detect and correct problems after hours of uneventful driving, including previous driving.
Even if the auto assist feature signals that it is confused, there is probably very little the driver can do until it's too late.
One example is the AF 447 crash, where the autopilot disengaged due to a significant but not immediately threatening problem, and the pilot flying got so distressed that instead of following the checklists (a memory item, even) he more or less dropped out of the sky in a massive stall that took a few minutes.
I digress, but one speculation is that in that particular case the pilot was not helped by the fact that the airplane changed its input mode (or "law") because the sensor data was incomplete due to the same problem - his inputs would have been acceptable under the normal flight law but, under the degraded law, they induced a fatal stall.
I have also been surprised that streets/highways aren't being whitelisted, and I wouldn't be surprised if we start to see that as autonomous vehicles become more mainstream.
I can imagine a system that does not allow autonomous driving on roads that have not been surveyed in, say, the last 24 hours by either another sensor equipped vehicle or perhaps a drone. When planning a route, the autonomous driver would either plan around a road that had not been surveyed within that time frame or plan a place to stop and ask the driver to take over well ahead of time.
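As a sketch of what such a freshness-based whitelist check might look like (the 24-hour window and the example timestamps are just the illustrative assumptions from above):

    from datetime import datetime, timedelta

    MAX_SURVEY_AGE = timedelta(hours=24)  # illustrative policy window

    def autonomy_allowed(last_surveyed_at, now=None):
        # Only allow autonomous driving on road segments surveyed within the
        # freshness window; otherwise the planner routes around the segment
        # or schedules a hand-over to the driver well before reaching it.
        now = now or datetime.utcnow()
        return (now - last_surveyed_at) <= MAX_SURVEY_AGE

    stale = datetime.utcnow() - timedelta(hours=30)
    fresh = datetime.utcnow() - timedelta(hours=3)
    print(autonomy_allowed(stale))  # False -> reroute or hand control back
    print(autonomy_allowed(fresh))  # True  -> segment eligible for autonomy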
Isn't it dangerous to rely on autopilot anyway? I don't drive a Tesla but I was under the impression that their instructions were "keep your hands on the wheel and feet ready to take manual control". It probably says something along the lines of "follow all applicable traffic laws" too, something which precludes doing 75 in a 65.
Hard not to notice that the Tesla was doing 10mph over the speed limit. Keeping to the speed limit is probably a better idea for auto pilot as a general rule, though it may not have made a difference in this particular accident.
In Europe that road would have a 40mph or maybe 50mph speed limit. A 65mph road would not have crossings without traffic lights, and a 75mph road would not have crossings at all.
Yet Tesla's Autopilot apparently drove 74 on this road? Not that I think speeding was the cause here (i.e., going 65 probably would not have prevented it), but I think there should be a special sort of fine for speeding autopilots; that's just not acceptable at all.
yeah, rural crossings are dangerous in the US, but it's probably not worth the inconvenience and cost of lowering the speed limits or adding stops.
people expect to use these things to travel hundreds of miles in a reasonable amount of time. there are literally thousands of these highways, and probably hundreds of thousands of crossings.
To be fair, it looks like the visibility is great. I don't know the weather conditions at the time of the accident, but I feel the 'driver' must not have had his eyes on the road at all. It's not as if a truck-trailer combination like that speeds onto a highway; it was probably visible entering it from a mile away.
They might want to put an overpass, separation rails, reflective distance markers around the entrances/exits, and generally do something about road safety.
An interesting tidbit from wikipedia: "[highway] is not an equivalent term to Controlled-access highway, or a translation for autobahn, autoroute, etc."
The problem is that there are many thousands of places that can use improvements. Individually they're not necessarily expensive, but collectively the cost would be astronomical.
I-5 is the primary North-South highway for the US west coast. In Oregon until recently there were numerous stretches without anything separating the traffic other than a grass median. Many cars have crossed the median at high speed, sometimes resulting in fatal crashes.
That particular improvement cost $7 million. But, to riff on a comment from the late Senator Dirksen, "$7 million here, $7 million there, pretty soon, you're talking real money."
"The narrow 30-foot median contains only a low earthen berm"
Don't you have to be driving like a bit of an asshat to feel like 30 feet of earthen berm is not enough between your lane and the oncoming lane? That's a pretty big barrier and the road in question is straight and level.
Don't know what the speed limit is there, but when an asshat is out of control at over 40mph, I wouldn't consider 30 feet of grass a "pretty big barrier" between myself and them.
slightly off topic, but when I saw the images of the trailer, I couldn't help but think that the collision wouldn't have been fatal if the trailer had been fitted with "side underrun protection". IIRC those are required on trucks and trailers driving in the EU.
Even if it had stopped the car from going under at 74 mph, odds are the occupant would still have died. I doubt they're built to withstand that much energy, though.
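For a sense of the energy a side guard would have to deal with (the mass here is my assumption of a rough Model S curb weight, not a figure from the report):

    MPH_TO_MS = 0.44704
    mass_kg = 2100                    # assumed approximate Model S curb weight
    speed_ms = 74 * MPH_TO_MS         # ~33.1 m/s

    kinetic_energy_kj = 0.5 * mass_kg * speed_ms ** 2 / 1000
    print(round(kinetic_energy_kj))   # ~1,150 kJ to absorb or redirect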
I wouldn't use cruise control in tight areas, and I get extra attentive maneuvering around trucks. I wouldn't be surprised if Autopilot gets modified so that it locks-out in these precarious circumstances.
Looking at the photo of the car, 'main body' must not mean what it implies to laypersons. Or perhaps 'generally' covers a broader spectrum of conditions than I'd guess.
I'd guess they probably mean the main body as opposed to the roof section. You can see that the front segment for example hasn't significantly crumpled, like it would have in a frontal collision.
Apparently "Intact" implies a better condition than I'd realized? I was thinking more along the lines of "still in one piece".
The exterior bodywork is pretty dinged up, but even the driver side door still looks fairly smooth, and everything is still in generally the right place and looks like it may still be structurally sound at first glance.
[0] http://www.ntsb.gov/legal/Documents/NTSB_Investigation_Party...