In shadow testing, a car is driven by a human, or by a human with the current autopilot engaged. A new revision of the autopilot software is also present on the vehicle, receiving data from the sensors but not controlling the car in any way. Instead, it makes its own driving decisions based on the sensor data, and those decisions can be compared with the decisions of the human driver or the older version of the autopilot. If there is a disagreement (the new software decides to zig where the old one zags, or the new software cruises on when the human hits the brakes), an attempt can be made to measure how different the decisions were, and how much that difference matters. Some portion of those incidents can be given to human reviewers to determine whether the new software is making a mistake. If it is, the mistake can be flagged for a fix, and the testing continues.
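The comparison step can be sketched in a few lines. This is a minimal illustration, not Tesla's actual pipeline: the `Decision` record, the tolerance values, and the scoring weights are all hypothetical, standing in for whatever the real system logs.

```python
from dataclasses import dataclass

# Hypothetical decision record: what the driver (or shadow software)
# commanded at one instant.
@dataclass
class Decision:
    steering_angle: float  # degrees
    brake: bool

def divergence(reference: Decision, shadow: Decision,
               steering_tolerance: float = 5.0) -> float:
    """Score how far the shadow software's decision diverges from the
    reference (the human or the current autopilot). 0.0 means agreement."""
    score = abs(reference.steering_angle - shadow.steering_angle) / steering_tolerance
    # "Cruises on when the human hits the brakes" (or vice versa) is
    # a serious disagreement, so weight it heavily.
    if reference.brake != shadow.brake:
        score += 10.0
    return score

def triage(log, threshold: float = 1.0):
    """Select the incidents important enough to send for human review."""
    return [(ref, sh) for ref, sh in log if divergence(ref, sh) > threshold]
```

A small steering wobble scores below the threshold and is ignored; a braking disagreement is always escalated to a reviewer.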
User Input is an error:
Elon Musk views any human intervention as an error situation for the Tesla Autopilot: whenever a human has to take control from the Autopilot system, that indicates an error that must be fixed on the way to a fully autonomous car.
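Under that framing, the natural progress metric is miles per disengagement. A tiny sketch of how one might compute it (the function and its inputs are illustrative assumptions, not an actual Tesla metric):

```python
def miles_per_disengagement(total_miles: float, takeovers: int) -> float:
    """Treat every human takeover as an error: miles driven per
    disengagement. Higher is better; full autonomy would push this
    toward infinity."""
    if takeovers == 0:
        return float("inf")
    return total_miles / takeovers
```

Tracking this number release over release shows whether each new Autopilot revision needs rescuing less often.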
Teslas improve with use:
Most of the systems we currently use aren’t built to improve through use. They have locked in performance and capabilities. These systems can only improve through revisions and patches made by technical experts. That approach is on the way out. Systems can now be improved operationally …. Further, for the most complex activities, this will be the only type of system you will be able to buy.
Let me guess: the media won’t be falling over themselves to report on these instances where the Tesla Autopilot saved lives.
Doctors told Neally later that he’d suffered a pulmonary embolism. They told him he was lucky to have survived. If you ask Neally, however, he’ll tell you he was lucky to be driving a Tesla. As he writhed in the driver’s seat, the vehicle’s software negotiated 30 km of highway to a hospital just off an exit ramp. He manually steered it into the parking lot and checked himself into the emergency room, where he was promptly treated. By night’s end he had recovered enough to go home.
Another analysis of the Tesla software disruption:
Tesla’s first bet is that it will solve the vision-only problem before the other sensors get small and cheap, and that it will solve all the rest of the autonomy problems by then as well. This is strongly counter-consensus. It hopes to do it the harder way before anyone else does it the easier way. That is, it’s entirely possible that Waymo, or someone else, gets autonomy to work in 202x with a $1000 or $2000 LIDAR and vision sensor suite and Tesla still doesn’t have it working with vision alone.
The second bet is that Tesla will be able to get autonomy working with enough of a lead to benefit from a strong winner takes all effect – ‘more cars means more data means better autonomy means more cars’. After all, even if Tesla did get the vision-only approach working, it doesn’t necessarily follow that no-one else would. Hence, the bet is that autonomous capability will not be a commodity.
This video from 2014 shows what happens when you improve cars at the speed of the software industry. Very, very impressive.
Being able to update the fleet isn’t just useful for self-driving:
“Researchers Hacked a Model S, But Tesla’s Already Released a Patch.” If you were the CEO of a car manufacturer, which of these headlines would you rather have written about you? The first speaks of a tired, old manufacturing model where fixes take months and involve expense and inconvenience. The second speaks of a nimble model more reminiscent of a smartphone than a car.