No, I am someone with a BS in EE/CS who worked four years as an engineer at a large firm. It is not fear mongering. It is legitimate questioning about engineering, from someone who has the experience.
For all of his self-promotion, and despite heading a car company and a rocket company, Elon Musk has never worked as an engineer at a large firm where he would have been mentored in engineering.
Please be specific. What is the fear mongering? It is a statement from someone educated as an engineer and trained as an engineer.
Engineering, whether it is designing buildings, cars, or airplanes, is thinking through the scenarios and testing for them. This seems to be a situation where they were ruling out overhead signs; but because of trucks, the clearance of those signs would be well over 10 feet, so they could have checked for anything under 8 or 10 feet, and they should have.
And, as I said before, if they didn't think this through, what else that we can't see are they not thinking through?
Just because you're an engineer doesn't make all of your arguments automatically correct. I've got the word "Engineer" in my job title but I don't beat people over the head with it when I want to make an argument for a technical position.
It's not like they haven't thought this through. Tesla has been clear that this is driver assistance, not a fully autonomous car. There are multiple warnings, including one every time you enable Autopilot, that make it clear you must pay attention.
This accident, while tragic, from current reports looks like the driver was not paying attention to the road. The best way we're going to get to fully autonomous cars is by collecting real-world data, and I think Autopilot has been a reasonable approach in that direction.
That "it's the driver's fault" position is going to be even less tenable if (or, unfortunately, when) someone not in the car is killed or maimed.
To be clear: if someone not paying attention causes a crash, it is their fault. This does not, however, absolve Tesla (or any other manufacturer) from the responsibility for releasing an insufficiently-tested and potentially dangerous system into an environment where anyone who has a passing familiarity with human nature knows it will not be used responsibly.
Tesla calls it 'beta' software, and went so far as to describe the deceased driver as a tester in its press release after the crash. Again, anyone who understands human nature can see that this is a cynical attempt to manipulate the public's opinion. It may, however, come back to bite them, when people start asking WTF they were thinking when they put beta software on the road in the hands of ordinary drivers.
If by that you mean you're a software engineer, I do think it's fair to distinguish between getting an EE/CS degree vs., say, a BA in CS plus programming experience.
The meta point I'm trying to share is that if you're trying to convince a technical audience (which HN certainly is), berating people by saying "I know better because I have this slip of paper" is the quickest way to get someone to dig in and dismiss your idea.
Engineers are natural skeptics; arguing from the technical side will always be the stronger position.
I no longer work as an engineer. I apply the engineering and safety rules developed for airlines, nuclear power, and oil & gas to patient safety. We put in "forcing functions" to ensure safety. For example, a driver cannot put their car in reverse without their foot on the brake.
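In software terms, a forcing function is just an interlock: the unsafe action cannot be commanded unless the guarding condition holds. A minimal sketch of the reverse-gear example, in Python, with hypothetical names and thresholds (this illustrates the pattern, not any vendor's actual logic):

    # Forcing function (interlock): the unsafe action cannot be commanded
    # unless the guarding condition is satisfied. Names and thresholds are
    # hypothetical, for illustration only.

    class ShiftInterlockError(Exception):
        """Raised when a shift is requested without the guard condition met."""

    def shift_to_reverse(brake_pedal_pressed: bool, speed_mph: float) -> str:
        """Allow reverse only with the brake applied and the car stopped."""
        if not brake_pedal_pressed:
            raise ShiftInterlockError("brake must be pressed to select reverse")
        if speed_mph > 1.0:
            raise ShiftInterlockError("vehicle must be stopped to select reverse")
        return "REVERSE"

    # The point of the pattern: the system does not warn and proceed anyway,
    # it simply refuses to enter the unsafe state.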
As far as I can tell, the design flaw in the Tesla (which I have already stated) is that the object-detection radar did not look high enough relative to the clearance of the car. Musk mentioned on Twitter that they didn't want to detect overhead signs, but those signs must be high enough to clear large trucks, so the radar should have been looking 8 to 10 feet vertically, and it was not. That is why the car crashed. Not operator error. Not beta software. Not going 74 miles per hour. The car lacked the appropriate radar coverage to detect objects up to 8 to 10 feet or so. Musk and supporters can spin, but the engineer asks why that sensing wasn't there. To me, it appears to be a flaw in the thought process, and it is hard to understand how Autopilot could be allowed to be turned on without proper object-detection coverage. This basic flaw makes me concerned about other flaws in the design.
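To make that argument concrete, here is a toy version of the clearance-threshold reasoning in Python. The cutoff and structure are my own illustration, not Tesla's actual filtering code: legal overhead signs clear the tallest trucks (roughly 13.5 feet), so anything whose underside is below about 8 to 10 feet cannot be a sign and must be treated as a potential obstacle.

    # Toy illustration of the clearance-threshold argument; not Tesla's code.
    # Overhead signs sit above maximum truck height, so any return whose
    # bottom edge is below the assumed cutoff should be treated as something
    # the car can hit.

    SIGN_CLEARANCE_CUTOFF_FT = 10.0  # hypothetical cutoff for "overhead sign"

    def is_potential_obstacle(bottom_edge_height_ft: float) -> bool:
        """True if a detected object could be in the vehicle's path."""
        return bottom_edge_height_ft < SIGN_CLEARANCE_CUTOFF_FT

    print(is_potential_obstacle(4.0))   # True:  side of a trailer -> must brake
    print(is_potential_obstacle(17.0))  # False: genuine overhead sign gantry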
As an engineer, I try to imagine what the risk analysis for the Tesla cars looks like for the Autopilot functionality.
Multiple warnings would seem to me to be insufficient to reduce the hazard presented by the Autopilot functionality; indeed, there are any number of videos of Tesla drivers using the vehicle contrary to the warnings. As one specific example, consider that the Tesla Autopilot cautions the driver to maintain hands on the wheel[0], yet does not enforce this requirement[1] despite having the capability to do so[2]. There's also a problem with Musk vis-à-vis marketing: he's the very public face of the company, and he is frequently overly optimistic in describing the car's capabilities, blurring the line between current and future capabilities, e.g., implying that holding the wheel isn't critical with a wink-wink, nudge-nudge[3].
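For illustration only: if the car can sense the small torques of a held wheel, enforcement is a simple escalation policy rather than new hardware. The thresholds and actions below are assumptions of mine, not a description of Tesla's behavior.

    # Hypothetical hands-on-wheel escalation policy, assuming a steering
    # torque signal is available. All thresholds and actions are illustrative.

    HANDS_ON_TORQUE_NM = 0.2    # assumed minimum torque indicating a held wheel
    WARN_AFTER_S = 10.0         # warn after this long without hands detected
    TAKEOVER_AFTER_S = 30.0     # demand takeover / slow down after this long

    def hands_detected(measured_torque_nm: float) -> bool:
        """Treat any torque above a small threshold as evidence of a held wheel."""
        return abs(measured_torque_nm) >= HANDS_ON_TORQUE_NM

    def enforcement_action(seconds_hands_off: float) -> str:
        """Map time since hands were last detected to an escalation step."""
        if seconds_hands_off < WARN_AFTER_S:
            return "none"
        if seconds_hands_off < TAKEOVER_AFTER_S:
            return "visual_and_audible_warning"
        return "slow_down_and_require_takeover"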
Tesla's PLM certainly has some sort of mechanism to continuously examine the risk analysis for the car, yet the Autopilot functionality doesn't seem to have been significantly updated to incorporate the changing risk profile. On top of all this, why does Autopilot allow one to speed? Why can one enable Autopilot on a road such as the exemplar if that is contrary to the instructions and the car is capable of knowing the difference? What does the risk analysis say on the topic of the feature name "Autopilot" being misunderstood by the public? &c. &c.
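Those questions imply a precondition gate at engagement time. A hypothetical sketch of what that could look like, with road classes and rules I am assuming purely for illustration:

    # Hypothetical engagement gate: refuse to enable the assist feature when
    # preconditions from the instructions for use are not met. Road classes
    # and rules here are illustrative assumptions, not Tesla's logic.

    def may_engage_autopilot(road_class: str, speed_limit_mph: int,
                             current_speed_mph: float) -> bool:
        """Engage only on limited-access highways and never above the limit."""
        if road_class != "limited_access_highway":
            return False
        if current_speed_mph > speed_limit_mph:
            return False
        return True

    print(may_engage_autopilot("limited_access_highway", 65, 74))       # False: speeding
    print(may_engage_autopilot("divided_road_with_cross_traffic", 65, 60))  # False: road type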
I do wonder about the engineering processes at Tesla. I admittedly don't work in automotive, but I do work in a regulated industry, and Tesla's apparent engineering process makes me very uneasy. Risk analyses that I have done took into account that the user may not have read the instructions for use, and I struggle to understand how Tesla could not do the same.
[2] It's not clear that they have a capacitance sensor as on properly-equipped Mercedes, but Teslas allegedly can detect minute torques applied to the steering wheel as happens when the wheel is held. I recall reading this about the Teslas, but can't find a citation at the moment.
1. Can the fact that the driver is not required to maintain physical control of the steering implements cause the computer algorithms (and, by extension, Tesla) to be considered legally culpable for failing to act as a driver is legally required to under Florida law?
2. Do the statements of Elon Musk et al. imply that the car is fit to act as a driver in its legal responsibilities under the rules of the road (independent of any legal ability for a driver to discharge those responsibilities to the car), so that failing to, say, be able to detect an obstruction in traffic constitutes a defect that violates the implied warranty of merchantability?