In his Master Plan, Part Deux, Tesla CEO Elon Musk spoke about the company’s goal to “develop a self-driving capability that is 10X safer than manual via massive fleet learning.” A key advantage Tesla holds over its competitors in the race toward autonomous vehicles is its ability to tap hundreds of thousands of vehicles for data through Autopilot. Musk confirmed:

“The whole Tesla fleet operates as a network. When one car learns something, they all learn it. That is beyond what other car companies are doing.”

Fleet learning data informs the carmaker how to improve self-driving capability. Each software improvement receives extensive internal validation before it reaches any customers. Musk notes that the system is named “Autopilot” after an airplane’s autopilot. He explained in “Master Plan, Part Deux” that worldwide regulatory approval would:

“require something on the order of 6 billion miles (10 billion km). Current fleet learning is happening at just over 3 million miles (5 million km) per day.”


What is “fleet learning”?

The concept of fleet learning is fascinating. Perhaps you adhere to the pragmatic explanation that it is little more than crowdsourced mapping data. Then again, you might accept that computer intelligence is self-reinforcing: independent learning accrues within the software, is aggregated, and is then distributed to other vehicles in the fleet, creating a cycle that continually feeds itself, improves, and recommunicates new data insights.

The closest definition of fleet learning is likely a combination of both, and a bit more. The information collected is an invariant: a condition that can be relied upon to remain true during execution of a particular task, a logical assertion that holds throughout a certain phase of execution. In Tesla’s case, mapped features such as driving lanes and road signs are not expected to change, which allows the system to treat a feature as self-validated once a threshold number of cars report it.

Significant refinements in the Version 8 software included upgrades to Autopilot in which more advanced signal processing created a picture of the world using the onboard radar. While radar had been added to all Tesla vehicles in October 2014 as part of the Autopilot hardware suite, it was intended only as a supplementary sensor to the primary camera and image-processing system.

Here’s an example of how fleet learning “comes in handy,” according to a September 2016 Tesla blog post. The vehicle maps the world according to radar, noting the position of road signs, bridges, and other stationary objects.

“The car computer will then silently compare when it would have braked to the driver action and upload that to the Tesla database. If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to the geocoded whitelist.”
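The whitelisting process described in the quote can be sketched as a simple counter keyed by geocoded location. This is an illustrative sketch only; the threshold value, the geohash key, and the function names are assumptions, not Tesla's actual implementation.

```python
from collections import defaultdict

# Assumed number of safe passes required before an object is whitelisted.
SAFE_PASS_THRESHOLD = 5

safe_passes = defaultdict(int)   # geocoded radar object -> safe-pass count
whitelist = set()                # objects confirmed safe to drive past

def report_safe_pass(geohash: str) -> bool:
    """Record that a car drove safely past the radar return at `geohash`.

    Returns True the moment the object crosses the threshold and is
    added to the geocoded whitelist; False otherwise.
    """
    safe_passes[geohash] += 1
    if safe_passes[geohash] >= SAFE_PASS_THRESHOLD and geohash not in whitelist:
        whitelist.add(geohash)
        return True
    return False

# Five cars pass the same overhead road sign without incident:
for _ in range(5):
    newly_whitelisted = report_safe_pass("9q8yyk8")
print(newly_whitelisted)  # True: the object is now whitelisted
```

Once an object is on the whitelist, the system can safely ignore that radar return rather than braking for it, which is exactly the false-positive suppression the blog post describes.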

Tesla used “Shadow Mode” in Autopilot vehicles, in which the system would not take any driving-assist or self-driving actions. Rather, it logged instances when Autopilot would have taken action and compared those results to the real-life actions taken by human drivers. The goal was to improve the self-driving algorithms until they exceeded human capabilities. With statistical data to back up the safety of its self-driving model, Tesla would be well positioned to persuade regulators that its vision for a Tesla-powered autonomous future will be safer for humanity.
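The shadow-mode loop described above amounts to comparing a planned action against the human driver's action and logging only the disagreements. The sketch below is a hypothetical illustration; the field names and action labels are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Disagreement:
    timestamp: float
    planned: str   # what Autopilot would have done
    actual: str    # what the human driver actually did

disagreement_log: list[Disagreement] = []

def shadow_step(timestamp: float, autopilot_action: str, driver_action: str) -> str:
    """Compare Autopilot's planned action to the driver's, without actuating.

    In shadow mode the planned action is never sent to the vehicle
    controls; mismatches are merely logged for later upload and analysis.
    """
    if autopilot_action != driver_action:
        disagreement_log.append(
            Disagreement(timestamp, autopilot_action, driver_action))
    return driver_action  # the human's action is always what the car executes

shadow_step(0.0, "brake", "brake")     # agreement: nothing logged
shadow_step(1.0, "brake", "maintain")  # mismatch: logged for review
print(len(disagreement_log))  # 1
```

Aggregated across the fleet, logs like this are what would let Tesla quantify how often the software agrees with, outperforms, or underperforms human drivers.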

On January 9, 2017, Musk announced that an upcoming planned software update for Autopilot would require one more week’s worth of fleet data.

“New [revision] for [hardware 2] Autopilot rolling out Mon to first 1000 [vehicles] & to rest of fleet in shadow mode. Also improves hardware 1 and enables Ludicrous+.”

Essential to Tesla (and to any machine-learning system) is training the network by dividing data into, most commonly, two datasets. The first is used to train the network itself; the second is held out as a separate mechanism to assess network performance. In lay terms, the goal is to use the test data to give a fuller and more accurate indication of likely real-world system performance.
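The two-dataset approach above can be shown with a minimal split function. This is a generic sketch of a train/test split, not Tesla's pipeline; the fraction and seed are arbitrary assumptions.

```python
import random

def train_test_split(samples, test_fraction=0.2, seed=42):
    """Shuffle samples and split them into a training set and a held-out test set."""
    rng = random.Random(seed)        # fixed seed for reproducibility
    shuffled = samples[:]            # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))  # 80 20
```

Because the test set is never seen during training, performance measured on it is a far better proxy for behavior on real roads than accuracy on the training data itself.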

Validation of fleet learning by federal investigators

Indeed, that network training and its application to fleet learning seem to have paid off: on January 19, a federal investigation into a fatal accident involving a Tesla using its Autopilot system was closed. The U.S. National Highway Traffic Safety Administration focused on four areas as part of its investigation: automatic emergency braking, how drivers interacted with Autopilot, data from crash incidents involving Tesla vehicles, and changes the company made to its systems.

The agency found data that supported Musk’s assertions that Autopilot is preventing accidents and even saving lives. Investigators found that the crash rate of Tesla vehicles dropped by almost 40% after Autosteer, one component of the Autopilot system, became available. The Tesla update blog offered a quick and efficient response.

“At Tesla, the safety of our customers comes first, and we appreciate the thoroughness of NHTSA’s report and its conclusion.”

Tesla’s continued Autopilot updates highlight the automaker’s diligence in over-the-air updates and fleet learning, two areas where it stands apart from the competition.

Photo credit: automobileitalia / CC BY