I really dislike the marketing around "autopilot." The common defense of Tesla's autopilot is that there are disclaimers, and the driver should always be at 100% attention.
Well, look at this, copied from Tesla's website:
> Full Self-Driving Hardware on All Cars
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. [1]
An average person is going to read this and think, "This car drives itself, and it's safer than me!" The rest of the page describes space-age features like switching lanes to get to an exit faster and self-parking:
> All you will need to do is get in and tell your car where to go ... Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed ... and park itself.
They're selling a car that full on drives itself. This is not being sold as adaptive cruise control.
FWIW, they included a weasel phrase by saying "hardware" is there, as opposed to software, but come on! An average person is not going to understand that distinction. I think they need to dramatically scale back the hype around this feature until it actually delivers what it promises.
Edit: I'm also confused by why they need to hype up autopilot so much. They are already selling more cars than they can make, and a sexy electric car appeals to lots of people. I think it would be enough to stick with that.
I don't mind marketing puffery in the sense of "world's most innovative [X] ever," but I agree with your view that "Full Self-Driving Hardware" is dangerous, and branding the integrated system as AutoPilot™ only amplifies this.
And I mean this without any gratuitous negativity about Tesla or Elon... but branding lane-following/collision warning/adaptive-cruise control as "AutoPilot" is far more dangerous and has far more serious real-world consequences than Apple calling their backup application "Time Machine."
It's even worse because they have a good product without the hyped Autopilot. When I read that Musk wanted to use only cameras, I became sure Tesla's self-driving is not safe.
Agreed. I rented a Model S a few weeks ago and drove it around southern California. The autopilot was a nice gimmick, but I really only used it in the stop-and-go traffic jams SoCal is famous for. There it was a godsend.
The rest of the time, the pleasure of driving such a responsive machine made the autopilot superfluous.
Humans drive with vision plus experience and intuition: we have an intuition for the laws of physics, we know what the other cars and pedestrians are, that they have goals and are similar to us, and with imperfect visual information we can interpolate and recover the full picture of the situation. An AI would need to be as intelligent as a human to drive as well as a human with only two cameras. Having radar, maps, and other extra inputs for the self-driving car will compensate for the missing intelligence, and a good radar plus some deterministic software should prevent hitting static objects.
In my country, drivers are tested for eyesight, hearing, and reflexes before getting the green light for driving school (so sound is also important), and self-driving cars should have mandatory tests to pass after each small update, at the very least so that DIY cars made by students can't just be tested on public roads.
No we don't. Humans use sound for a lot of context clues (like "oh there's a motorcycle in my blind spot") and use their sense of orientation to gauge how hard they're turning (among other things).
Hopefully self-driving cars use a lot more than just cameras, too.
Add to that that Elon said "self-driving is basically a solved problem" and that they'll be doing a cross country trip soon.
I'm honestly puzzled that Musk doesn't seem to be taking this seriously and is joking about it on Twitter (he has an April Fools' thing about Tesla being bankrupt). As much as I love Tesla's mission, they seem screwed to me. They just issued a massive recall, their autopilot system is killing people, their financials are terrible, and they haven't lived up to their manufacturing promises. Short sellers that have been talking about its downfall for more than a year are having a field day now. I know that Elon seems to have this supernatural ability to bounce back, but it's hard to see how Tesla will survive this.
I think there is a lot of misunderstanding and missing context in this comment (and many of the recent Tesla threads here on HN). I'll try to add some additional perspective inline:
> Add to that that Elon said "self-driving is basically a solved problem" and that they'll be doing a cross country trip soon.
He's referring to the approach, and while perhaps a silly statement, Tesla is very clear on their web site that there is more work to do. Immediately after the pull-quotes in the parent comment, Tesla writes in bold letters: "Please note that Self-Driving functionality is dependent upon extensive software validation and regulatory approval, which may vary widely by jurisdiction."
Nobody gets to buy Full Self Driving without seeing a litany of disclaimers that it is not ready and that the timetable is unknown (and which are presented front and center, not buried in some lengthy contract).
> I'm honestly puzzled that Musk doesn't seem to be taking this seriously
I don't think this is right. Safety has been a first-order concern in every vehicle Tesla has designed. It's the thing Musk has led with in every vehicle presentation (even though crowds continue to find it boring). And it's been repeatedly cited by Musk and others at Tesla as a major motivation behind building Autopilot, even before NHTSA confirmed it as a safety improvement. We can debate approaches and results (and hell, even motives if you really want), but it seems clear from both words and actions that Tesla takes safety extremely seriously. I don't see the connection between a short-seller April Fools' joke on Twitter and taking Autopilot safety seriously.
> They just issued a massive recall
Yes, voluntarily, for a component that Bosch makes (and for which Bosch is paying for replacements) which may be prone to corrode, and that has never caused a known accident or injury. While we debate the seriousness of this, Ford is recalling more cars than Tesla has ever produced because the steering wheels are falling off. This is barely a footnote within the industry, is certainly not an existential threat to Tesla, and IMO it speaks all the more to how seriously they take safety.
> their autopilot system is killing people
At worst, Autopilot is preventing more crashes than it is causing; it is almost certainly saving more lives than it is taking, although of course proving a negative is harder. One-off situations and the element of control color how we think about crashes with automated systems, but NHTSA seems to feel that Autopilot is a significant net safety improvement.
> their financials are terrible
Eh, yes and no. They're operating on a maximization function for growth, which requires a high degree of financial risk-taking. So sure, they're in a tight spot, but it's in line with where they are on a risk/reward spectrum, and while capital may be expensive for them if the market loses confidence, there doesn't appear to be any threat that they won't be able to raise it.
> and they haven't lived up to their manufacturing promises.
Yeah. This is probably the best, fairest, and most lasting criticism of Tesla. I hope they can turn that around.
I'm an average engineer (sometimes 2-3x at best) and was scammed into thinking Autopilot was safer and that Full Self-Driving was just weeks away based on that page - which back in Dec'16 actually said those features would begin rolling out at the end of Dec'16. Hopefully https://www.pacermonitor.com/public/case/21195146/Dean_Sheik... will come to fruition soon...
Except one really wants to think that all the words on the configurator page are just forced verbiage from Tesla's legal team, and really wants to believe Musk and the video at https://www.tesla.com/autopilot, which starts with the claim that the person in the driver's seat is only there for legal reasons...
It could be a well-meaning tech-culture screw-up; most tech people know that airplane autopilots weren't end-to-end automated for the majority of their existence.
Most tech people also know how "better than human" sounds to the public and the average driver. This isn't well-meaning tech culture, it's a PR statement that is misleading and formulated the way it is precisely because it is a sales pitch.
It's irresponsible and unethical. It should be called a 'driver assistant' and have a warning label in font size 100 that tells drivers to not treat it as autonomous and infallible.
As a reasonably self-aware dumb tech person, I would say that this defense would not fly with me any more than "any ____ should know that the product can't actually do that." There's a reason that such consumer protections are in place against claims like this, and especially with something like an auto-pilot, I think it's downright irresponsible to allow marketing to dilute the reality of the performance. This isn't like GB vs GiB for storage, which is annoying but understandable, it's a tool that has very serious consequences if what is advertised doesn't match up with reality. Caveat Emptor only covers so much, and when you have non-stop marketing material about autonomous vehicles and auto-pilot being produced about your product (whether internally produced or produced by third parties), there is a responsibility to set out a clear expectation for the product.
There is a case that dates back to the early days of the auto industry.
tl;dr: Lady was driving a car and the wheel fell off due to a manufacturing defect. Auto makers said they weren't liable cause Caveat Emptor. Supreme court said a normal person has no way of knowing if the wheel assembly on a car is defective. Auto maker was held liable.
Defective or misleadingly advertised auto pilot? The above on steroids.
> Most tech people know that airplane autopilots weren't end-to-end automated for the majority of their existence
The Lockheed TriStar flight-control system flew a fully-automated chocks-to-chocks take-off, flight and landing in ... 1973 I think?
Anyway the comparison with aircraft autopilots is confusing. The pilots will usually couple the autopilot soon after take-off and from then it follows the commands of the Flight Management System through which the pilots interact; new waypoints, level changes etc.
Several test schemes are investigating uploading routings directly from ATC via datalink, with the pilots just having to press a button to accept.
There is also a proposal to permit TCAS to command the autopilot to prevent collisions, instead of providing advisory notice to the pilots.
I disagree. You can't blame Tesla for consumers' lack of reading comprehension and/or ignorance. Same bs as with Facebook doing the news now. Everyone who looked could easily see what was going on, but suddenly everyone is outraged.
I was referring to the text referenced in the parent. There is nothing misleading about it. In fact it's very informative.
The statistics around self-driving are a very complex topic. Regardless, thirst for outrage aside, even if SDCs are less safe now, it's still worth pushing through, because long term they will cost fewer lives once we get it right.
Really sad to see Autopilot is a joke and nothing more than the adaptive cruise control found in common cars, despite Musk claiming otherwise a year ago [1]
Anyone who owns Autopilot knows the warning at :20 is NOT a crash warning at all; it's just the usual warning you get every minute to jiggle the wheel if you don't hold it firmly enough for the car to detect your hands.
The bad state of the road plays a huge role. There should be a "report unsafe road conditions" function in these cars to get US roads up to Belgian standards. Doesn't Trump want to spend money on infrastructure anyway?
This (video) is another instance where the lanes split apart but the part in the middle has absolutely no markings to indicate that you shouldn't drive there.
It's pretty common to have poorly painted roads. There are definitely some roads in my area where the lane lines are no longer discernible at all if it's raining. (Newer roads have embedded reflectors.) If self-driving cars are going to survive, they need to be able to deal with the environment we have, not the utopia where all the road lines stay well painted. ;)
UK roads are some of the safest in the world. US roads are an outlier among developed countries and sit with the Third World when it comes to road safety.
AFAIK, the radar is tuned to explicitly ignore non-moving objects because it has to be able to ignore overhead road signs and other such stuff [1]. It is tuned more to changes in speed of the vehicles in front of the car.
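For anyone curious what that tuning looks like in practice, here's a minimal sketch (my own illustration, not Tesla's or Bosch's actual code; all names and thresholds are invented) of why a filter that drops ground-stationary returns also drops a crash barrier dead ahead:

```python
# Illustrative only: a naive radar clutter filter that keeps moving targets.
def filter_radar_returns(returns, ego_speed_mps, moving_threshold_mps=1.0):
    """Keep only returns that appear to move relative to the ground.

    returns: list of dicts with 'range_m' and 'relative_speed_mps'
             (negative = closing on us).
    """
    tracked = []
    for r in returns:
        # Ground speed of the target = ego speed + speed relative to ego.
        ground_speed = ego_speed_mps + r["relative_speed_mps"]
        if abs(ground_speed) > moving_threshold_mps:
            tracked.append(r)  # moving vehicle: keep tracking it
        # else: treated as clutter (signs, bridges... or a barrier) and dropped
    return tracked

# A stationary barrier 80 m ahead at ~108 km/h closes at exactly ego speed,
# so its ground speed is ~0 and it is filtered out along with the road signs.
print(filter_radar_returns(
    [{"range_m": 80.0, "relative_speed_mps": -30.0}], ego_speed_mps=30.0))  # -> []
```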
Tesla was working with their supplier (Bosch?) to get the radar to work as a six-field array, providing more resolution than “the nearest return is 100m away”. The discussion at the time was that higher-resolution radar would remove the last advantages that lidar supposedly has over cameras.
The photo is a good example of how the lines should be painted. The video shows a well-painted line on the left of the “v”, but the chevrons and right-side line are exceptionally faded. If I were to guess, the extreme contrast probably contributed.
It's not universally the case that a white line may not be crossed. In Arizona, for example, carpool/HOV lanes have a solid white line that may be crossed at any time.
Part of me wonders whether or not we're going to see mutual improvement of infrastructure with self driving cars in the future. This seems like something that you could obviously fix by adding in some kind of passive or active flag that, to a self driving car, says "woah, you're /really/ not supposed to be here."
I know we're supposed to be focusing on building systems that are as capable as humans are as drivers. However, if we approached the problem by also changing the roads with some kind of guide beacons, it would probably be trivial to detect them, and it would improve trustworthiness a lot more.
We already have lane markers and chevrons and all sorts of things for humans to interpret and react to. What if self-driving systems got guides that were optimized for them? Instead of really luminescent paint, they would be radio-noisy or have some other electronic fingerprint that's just as "loud" as visual and audible markers are for humans.
At some point, if we want self driving cars to be the real future, we should start working the problem from both ends. When properly implemented, self driving technology is safer than normal humans -- and if we can accelerate that by adding stupid easy electronic markers to some roads, then why not?
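As a thought experiment, the machine-readable marker could be as simple as a keep-out beacon that vetoes lane following. This is a hypothetical sketch: the Beacon format and the plan_with_beacons helper are invented for illustration and don't correspond to any real standard or vendor API:

```python
from dataclasses import dataclass

@dataclass
class Beacon:
    kind: str          # e.g. "keep_out", "work_zone", "lane_closed" (hypothetical)
    distance_m: float  # estimated distance to the beacon

def plan_with_beacons(lane_plan, beacons, keep_out_range_m=50.0):
    """Follow the normal lane plan unless a nearby keep-out beacon vetoes it."""
    for b in beacons:
        if b.kind == "keep_out" and b.distance_m < keep_out_range_m:
            return "abort_lane_follow_and_alert_driver"
    return lane_plan

# A gore area (the wedge between splitting lanes) broadcasting "keep_out"
# would override the faded paint the cameras are struggling with.
print(plan_with_beacons("follow_lane", [Beacon("keep_out", 30.0)]))
```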
We should also not allow software that hits a static object: the car should be able to detect a solid object, determine its speed relative to the car and to the ground, detect a possible collision, and take action.
You can put electronics on the roads, but if the cars depend on them, they will fail wherever those electronics are missing.
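A deterministic check like the one described above can be very simple. Here's a minimal, purely illustrative time-to-collision sketch (made-up names and thresholds, not any production AEB logic):

```python
def should_emergency_brake(distance_m, closing_speed_mps,
                           ttc_threshold_s=2.0, min_distance_m=5.0):
    """Return True if the object ahead demands an emergency stop.

    closing_speed_mps: how fast the gap is shrinking (ego speed minus the
    object's ground speed); <= 0 means the gap is not shrinking.
    """
    if distance_m <= min_distance_m:
        return True
    if closing_speed_mps <= 0:
        return False
    time_to_collision_s = distance_m / closing_speed_mps
    return time_to_collision_s < ttc_threshold_s

# A stationary barrier 50 m ahead at 30 m/s (~108 km/h): TTC ~1.7 s -> brake.
assert should_emergency_brake(50.0, 30.0)
```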
Not sure why this point isn't being more widely discussed, especially as it is a known problem[0] of almost all 'auto-pilot' systems based on radar.
To me it would seem to be self-driving 101, namely - don't hit a stationary object.
If the challenge is so great that current technology can't overcome the difficulties of detecting stationary objects in the path of the car (not just those stationary objects to ignore like overhead gantries) then it's time to change the language around 'autonomous' driving to ensure drivers understand the limitations. Obviously this isn't happening well enough right now as people are dying because of it.
From a personal perspective, until Lidar becomes commonplace I think I would eschew the 'autonomous' modes offered in the current generation cars.
> and if we can accelerate that by adding stupid easy electronic markers to some roads, then why not?
It's not so easy.
First, we'd need to agree on a nationwide standard for these electronic markers, since you wouldn't want to have different markers in NY and California. (It would be even nicer if a U.S.-made car would also work safely in Canada, Mexico, etc.)
Second, we'd need to pay for the markers to be purchased, installed and maintained. Interstates, state highways, county roads and city streets are the responsibilities of different levels of government, which have different budgets and different politics. I can't imagine NYC, which can't even keep its major roads free of potholes, finding the funds to add electronic markers to roads.
You'd also have to make sure to reconfigure the markers if a road was under maintenance (e.g., lanes temporarily closed, lanes running in the opposite direction, etc.).
Some early driverless vehicle systems experimented with magnets embedded in the road, so it's definitely an option to install helpers to the current infrastructure to improve the safety of the autonomous vehicles.
But who should do that? Should the government allocate budget for that? If so, which manufacturers' systems should they target?
Hmm, maybe tech companies should not be developing competing standards; they should form some kind of consortium and agree on some guidelines that the government can use to improve safety.
Anyway, this defeats the purpose of driverless cars, because it will turn cars into trains with personal pods instead of wagons.
Just build a mass transit system with trains then, they can be made driverless easily and probably much more efficient than running each pod with its own systems. It works fine in Europe.
That kind of lane marking (or lack thereof) seems very unsafe. I realize that often this is due to construction / etc but the immediately-to-the-left lane's marker was Very Clear, and didn't have a visible split in it. I had to watch it a second time to make sure that I didn't miss the indications that the lane was splitting away (other than the highway signs). This kind of thing (lack of clear markings) can be really dangerous when driving at night in the rain (the road is more reflective than normal), or in an unfamiliar area.
The autopilot could be better, but the road engineers should have made sure the lines were painted correctly.
But in this video, the left solid white line was more visible than the right one. For whatever reason -- a trick of light, snowplows tore it up this winter, who knows.
That kind of thing is going to happen and autonomous cars need to deal with it in order to be actually autonomous.
On a tangential subject: why do so many US highways use such a light gray asphalt mixture? It certainly doesn't help the contrast with the road markings.
The car would’ve been destroyed and the driver injured. In this case, Autopilot shouldn't just have been better, it should have actually worked. It didn't even stop the car.
One thing I always wonder about is the Tesla HUD -- my gut feeling is that if instead of rendering three generic lanes, it tried to render the lanes/obstacles it saw, the driver might be more attentive while in Autopilot mode.
1. The interface changes, and changing things are more interesting to look at.
2. Seeing something like two lanes when you're on a five-lane highway should make a driver do a double-take and perhaps pay more attention or disengage Autopilot.
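To make the idea concrete, here's a tiny sketch (my own illustration; none of these names come from Tesla's software) of rendering only what perception actually detected and flagging a mismatch against the expected lane count:

```python
def hud_lane_display(detected_lane_count, expected_lane_count):
    """Show only detected lanes and flag when fewer are seen than expected."""
    lanes_to_draw = [f"lane_{i}" for i in range(detected_lane_count)]
    mismatch = detected_lane_count < expected_lane_count
    return {"lanes": lanes_to_draw, "attention_warning": mismatch}

# Perception sees 2 lanes on a 5-lane highway: the HUD looks obviously wrong,
# nudging the driver to pay attention or disengage.
print(hud_lane_display(2, 5))  # {'lanes': ['lane_0', 'lane_1'], 'attention_warning': True}
```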
It does matter when the article title claims that it's a reproduction of the accident. If it's not the same location, it's not really a reproduction, even if the cause/effect is the same.
I don't think the word reproduction is always used as strictly as your definition. In computer software/hardware, the word is used even if the geographical location and hardware are different. I don't see why using it for reproducing car issues would be any different.
If anything it's more damning since this repro shows that the behavior is not just some quirk related to the road surface or line painting at the original location.
But I still think it's fair to call it a reproduction since he showed that the car did try to steer him into the barrier. Which is a problem that the owner who died in the crash had reported to Tesla.
While Tesla screwed up the marketing on this product and failed to even ensure the driver is looking at the road at all times, I believe the black box approach (recover crash data from every incident and work it back into the product) is exceptionally powerful, as demonstrated in the airplane industry.
It used to be that whenever an accident like this happened, people shrugged their shoulders and blamed the victim; very damaging design flaws were only corrected statistically, after many people wound up dead. Now we have a corporation to blame, we internalize that human drivers cannot be trusted, and the technology must take over and learn from each and every accident.
The safety advances this routine will bring forth cannot be overstated - and putting pressure on Tesla and other manufacturers is the best way to get them.
Wow, I had to watch the video a second time to see the markings of the split. At the beginning of the video, I was expecting that there would be two lanes on the left and two lanes on the right.
Tesla this month pushed out an update which makes autopilot disengage a lot less than it used to. People have been making videos all month showing much improved performance [1], but this overconfident behavior may be a side effect of the same change.
Autopilot marketing is bad and Elon deserves a lot of the blame, but do we really expect to be able to trust these systems enough to take our hands off the wheel and our eyes off the road?
IMO we are still in the very early stages of this work in the industry. Personally I think bugs and flaws in systems like this are normal at this stage. I would expect the same with GM Super Cruise and I wouldn't trust that either to let go of the steering wheel.
>I think bugs and flaws in systems like this are normal at this stage
I don't think you should be able to license a vehicle to drive on public roads with such "bugs and flaws".
I don't have a choice; I have to use public roads. My local, state, and national governments owe it to me to keep experimentation off of the public roads that modern life is impossible without.
The problem with robot cars is they fail in very alien ways. They will be made illegal after a small number of horrific incidents where a car will kill people in ways a licensed human never would.
I think the usage of the word "intelligence" in "Artificial Intelligence" needs to stop. It was cute in the 90s, but today people are starting to actually believe it, when it's really a shallow approximation that is vulnerable to profoundly unintelligent failures, and it will continue to be until there starts to be a debate on whether a synthetic system is actually sentient.
It's a good point about whether they should be on public roads. At the same time, it's crucial to their success that they are and experience outlier situations.
The hope is that by experiencing an outlier and making a mistake, the algorithms will be able to adapt to ensure that mistake never happens again. Humans today don't have this kind of networked, massively parallel mind to avoid the mistakes of others. Again, that's the hope.
I think it'll be a long time before learning algorithms will take over the wheel completely - but I think we are at or near a time when these systems can and should let us know of low confidence, insufficient data to make a decision, or equal confidence of multiple outcomes. A networked AI should be able to tell us far enough in advance (e.g. while driving) for humans to step in and make the final call, based on their own rationalization, empathy, etc.
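A rough sketch of that "low confidence, ask the human early" behavior (illustrative only; the scores, thresholds, and function name are assumptions, not any vendor's API):

```python
def needs_handover(path_scores, min_confidence=0.85, ambiguity_margin=0.10):
    """path_scores: the planner's confidence in each candidate path (0..1)."""
    scores = sorted(path_scores, reverse=True)
    if not scores or scores[0] < min_confidence:
        return True                      # low confidence in the best option
    if len(scores) > 1 and scores[0] - scores[1] < ambiguity_margin:
        return True                      # two options look equally plausible
    return False

# Faded lane split: "follow left line" scores 0.55 vs "follow right line" 0.52,
# so request the handover while there is still time for the human to react.
print(needs_handover([0.55, 0.52]))  # True
```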
I don't care about a company's success if it requires endangering members of the public who have no choice in the matter.
If we were talking about, say, astronauts and test pilots who know the risks they take and make the choice, I'd have no problem with them risking their lives. We're not.
>I think it'll be a long time before learning algorithms will take over the wheel completely
They did. A system called "auto pilot" drove a man at full speed into a barrier and killed him.
I have a limited trust of other humans on the road. The deal is that I understand them. They behave in a way that makes sense to me, even (or especially) the mistakes they make. As another driver or a pedestrian, I can communicate a lot with a car just by looking at the driver and interpreting body language and attention.
> The hope is that by experiencing an outlier and making a mistake, the algorithms will be able to adapt to ensure that mistake never happens again.
I think people are hugely underestimating the number and types of edge cases. If you have to teach a car not to drive into a static object at full speed, the issue isn't going to be about fixing this one problem; the issue is that if this problem exists, 1000 others like it do too.
I come from cold climates and country roads, and I think the fact that these vehicles are being designed and tested in the Bay Area (or Arizona) makes them incredibly dangerous because of the fair-weather bubble the people designing them live in. Make your engineers live and work in rural northern Minnesota and maybe you'll get a bit more trust from me.
Is it the driver or the car doing the final braking in the video? Even if autopilot followed the wrong line it should perform emergency braking or evasive maneuver when the barrier is right in front of it. This is not just one error, it's two.
I assume the driver didn't want to wait and see if the car brakes at the very last moment, so it's hard to determine if the car would brake in this case.
This is the driver braking, you can hear the AP disengagement tone as the driver presses the brake pedal, and there is a distinct lack of AEB alert which should sound like this: https://youtu.be/FCegUNtgfuo
So it just fails to detect a fairly large obstacle in the road? This doesn't seem very different from the adaptive cruise control in Hondas, but people somehow use it as if it's a true autopilot?
I don't own a Tesla, but I have driven one and was floored by whatever level of Autopilot it had. Still, as a non-owner I think I'd never trust Autopilot 100% (it's like riding shotgun: I'm as attentive as the driver, or more). So I'm not sure whether the Autopilot feature is something that makes owners trust it more the more they use it.
Now, Tesla doesn't claim it is Level 5 Autonomous Driving with "Autopilot". Nor does it say one can drive it handsfree.
Accidents can be much worse if someone drove hands/attention free with just the cruise control on.
In this case of the fatal accident, I'm really surprised that the driver (as per the family) knew the problem area very well, and yet was driving with the autopilot on and not paying attention.
Unless he let Autopilot go on its own on purpose, thinking the Model X is so safe that a head-on collision with the barrier would prove his point to Tesla, or Autopilot became too adamant and refused to disengage, leading to the crash, I'm not sure at what point Tesla can be held liable, if at all.
Also, I thought those barriers are super safe with enough crumple zones. Wonder how crashing into a barrier like that could lead to such a fatal accident.
Why do the road markings trump the objects coming up?
Obviously if a giant-ass obstacle is coming up, the car should be prepared to drive around it, and to decelerate as a last resort. Moreover, if the obstacle is moving, the car has to try to extrapolate its movement in order to avoid it.
How else can you pass trucks and such?
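For the moving-obstacle case, the extrapolation can be as simple as a constant-velocity prediction checked against the planned path. A minimal sketch, assuming constant velocity and flat 2-D coordinates (real planners use much richer motion models; all names here are invented):

```python
def predict_position(pos, vel, t):
    """Constant-velocity prediction: pos and vel are (x, y), t in seconds."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def path_conflicts(ego_pos, ego_vel, obs_pos, obs_vel,
                   horizon_s=4.0, step_s=0.5, clearance_m=2.0):
    """True if ego and obstacle come within clearance_m inside the horizon."""
    steps = int(horizon_s / step_s) + 1
    for i in range(steps):
        t = i * step_s
        ex, ey = predict_position(ego_pos, ego_vel, t)
        ox, oy = predict_position(obs_pos, obs_vel, t)
        if ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5 < clearance_m:
            return True
    return False

# A slower truck 20 m ahead drifting into our lane: conflict in ~2 s, so react.
print(path_conflicts((0.0, 0.0), (0.0, 30.0), (1.5, 20.0), (-0.5, 20.0)))  # True
```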
Also, why can't these cars be tested autonomously with soft tops instead of a metal chassis, so collisions don't really hurt anyone? Basically a pillow on wheels!!
I wonder how easy it is for Tesla to 1) find the exact problem and 2) fix it (provided it is software-based). Does it mean that they first have to figure out what some "black box" neural network(s) are getting wrong and then retrain (on simulations), etc.?
What if city X decides to repaint all its buses and that confuses the object detection system? How long would it take to get the (hot)fix out the door...
I can totally see how a human driver can crash in this situation if he or she is solely focused on the lane markings. This puts the lie to Tesla and Mobileye's claims that camera-based computer vision is enough for autonomy just because humans use two low-res cameras to drive.
> This puts the lie to Tesla and Mobileye's claims that camera-based computer vision is enough for autonomy just because humans use two low-res cameras to drive.
No, it doesn't. It just means that Tesla's current implementation of camera-based computer vision is not yet good enough.
The companies are stating that they won't be using LIDAR or Radar, and that cameras are enough.
Cameras tend to rely on markers of some sort - and in cases like this, the markers on the road were very misleading. Even a short-sighted beginner human driver who was told to carefully follow the lanes would have made the same mistake. And cars running purely on cameras are comparable to short-sighted beginner human drivers.
Tesla uses radar, and is working on improving its radar to return point clouds just like lidar does.
Even then, cameras are enough for accurately mapping the world around the car, it’s just that Tesla’s state of the art has not matched the academic proofs of concept that have been demonstrated over the last few years.
I’ve driven this road many times and it’s pretty obvious where to go. The lanes split into 2 different roads and you see the elevated way turning right in the distance so turning left would never make sense.
I wouldn’t be surprised if the chevrons which are pretty worn down would be low priority on the maintenance list given how obvious it is and how annoying work there would be.
What a terrible video title. Also, why are people recording with their phones while driving (or in this case, letting the car drive)? This doesn't look like a dashcam.
Fun fact - given the current implementation of Autopilot, it is insane to expect an autonomous vehicle. There should be some kind of common sense/intelligence test for would-be users of Autopilot. So far, every incident involving Autopilot that I have heard about or looked at can be summarised with "Driver was being a dumbass"
The word autopilot doesn't imply autonomy. Standard adaptive cruise control alone is more powerful than any plane autopilot system (which is the origin of the word).
If all the PR and the CEO keep saying "autopilot", users will use it as an autopilot. If it isn't an autopilot, it should have a different name.
The word "autopilot" originated in aviation, where it is a system used to control the trajectory of an aircraft without constant 'hands-on' control by a human operator being required. Autopilots do not replace human operators, but instead they assist them in controlling the aircraft.[1]
Most aviation professionals understand those limitations. You're complaining that the average Tesla owner ascribes a different meaning to that word?
Sadly, in aviation, over-reliance on autopilot has also led to very tragic results, the most prominent recent example being the crash of AF447, which killed 228 people.[2]
Musk is doing nothing worse than Airbus, which has pushed automation to the extent that many of their pilots no longer understand the basic principles involved in aviation. They aren't pilots, they're operators of a very complex machine which has a UX that is usable in benign conditions, but that is abominably bad under difficult conditions.
When faced with "temporary inconsistencies" from various sensors, the AF447 autopilot gave up and returned control to the pilots. But the pilots' inexperience in actually "flying" the aircraft, together with the awful Airbus UX, caused one pilot to say "We've lost all control of the aeroplane we don’t understand anything we’ve tried everything".
In reality, AF447 was a perfectly flyable aircraft. "Pitch and power" was all it needed: holding the aircraft level with pitch at (IIRC) 6 degrees above the horizon and applying 85% power was all that was necessary.
> without constant 'hands-on' control by a human operator being required.
This is exactly what the Tesla autopilot isn't. It requires constant hands-on control. It in fact goes as far as flashing alarms when your hands are not on the wheel.
It requires constant hands on steering wheel, but not constant control (that's just regular driving) - only readiness, just like in aviation. The difference is that you have more time to react when you're flying so hands on controls are pointless.
Fun fact - If you defend your system as "Driver is responsible for every accident, even if the car drives full speed into a static obstacle", then your system will never be summarized as "at fault".
[1] https://www.tesla.com/autopilot