Will the Tesla Model 3 Be the First Truly Self-Driving Car?
Elon Musk has said that people should be able to summon their cars from across the country by the beginning of 2018, which happens to coincide with the Tesla Model 3’s planned release date.
On the evening of March 31st, Elon Musk unveiled Tesla’s sinuous Model 3, the company’s first “affordable” electric-car model. After touting the sedan’s punchy acceleration, two-hundred-and-fifteen-mile battery range, and sweeping, seamless glass roof, he mentioned its base price of thirty-five thousand dollars and told the audience that prospective buyers had already reserved more than a hundred and fifteen thousand of the vehicles, to rapturous applause and shouts of “You did it!” Not one to miss a marketing trick, Musk capped the night on Twitter with a cryptic thank-you message that promised more: “Thanks for tuning in to the Model 3 unveil Part 1! Part 2 is super next level, but that’s for later . . . .”
Within hours, the tech community was awash in speculation about what more Tesla could have in store for the Model 3. Some wondered, specifically, whether it would be the world’s first mass-market, fully autonomous self-driving car. Spurred on by Google and other Silicon Valley companies, the auto industry has been tinkering with autonomous vehicles for years. Tesla has demonstrated a unique appetite for risk, however, by equipping two of its cars, the Model S and Model X, with rudimentary self-driving capabilities. Musk has also said that people should be able to summon their cars from across the country by the beginning of 2018, which happens to coincide with the Model 3’s planned release date. And, after the announcement, Bloomberg’s Tom Randall noted that Musk kept referring to the Model 3’s “steering system” or “steering controls,” rather than its steering wheel, and that he’d said that the system shown at the Model 3 launch wasn’t final. But perhaps the most compelling evidence that Tesla’s Model 3 may have significant autonomous capabilities lies in the company’s unique technological approach, which could allow it to achieve a “hands-off” driving experience at a fraction of the cost of its competitors—including Google, the heavyweight in the field. (A disclosure: one of the authors, Levi Tillemann, owns stock in Tesla.)
To understand why Tesla’s strategy may give it a decisive advantage, it helps to know the history of its main competitor’s approach. As Burkhard Bilger detailed in this magazine in 2013, the research began with a small group of Bay Area roboticists, led by Google’s Anthony Levandowski and Sebastian Thrun, who bought a Prius, rejiggered its electronic controls, and installed a laser-based system, known as lidar (for light detection and ranging), that measures the physical distance between the car and the objects around it. At first, they worked on the vehicle as a side project. But then Google brought the endeavor in-house and gave it a basically unlimited budget.
There’s a very good reason Google and other manufacturers have bet on lidar: by repeatedly firing pulses of laser light in all directions and timing their return, lidar can create a precise three-dimensional map of the car’s surroundings, accurate down to a couple of centimetres. This precision allowed the Google car to progress much faster than most people expected. (Google demonstrated these advances in a 2012 YouTube video featuring Steve Mahan, who is legally blind, as he went about his daily routine in an autonomous Google car.)
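To make the time-of-flight idea concrete, here is a minimal Python sketch of the core arithmetic—not Google’s actual software, and the example timing value is purely illustrative:

```python
# Minimal sketch of the lidar time-of-flight calculation.  A real unit
# fires hundreds of thousands of pulses per second in all directions
# and assembles the results into a 3-D point cloud; this shows only
# the distance computation for a single pulse.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(seconds: float) -> float:
    """Distance to the reflecting object, in metres.

    The pulse travels out and back, so the one-way distance is half
    the total path length.
    """
    return SPEED_OF_LIGHT * seconds / 2.0

# Example: a pulse that returns after 200 nanoseconds hit something
# roughly thirty metres away.
print(f"{distance_from_round_trip(200e-9):.2f} m")  # ~29.98 m
```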
But while much of the tech community marvelled at Google’s achievement, Musk thought that lidar was too expensive. (A top-notch system costs about eighty thousand dollars.) He believed that Tesla could get the same results with a cheaper suite of sensors. Musk first revealed that Tesla was considering adding autonomous-driving features in 2013. Tesla’s ultimate strategy was to bypass lidar with a combination of simple cameras, radar (which uses radio waves to estimate distances to objects that are farther away), and ultrasound (which uses sound waves to estimate the distance to objects in the immediate vicinity). The cameras, which aren’t much different from the ones used in smartphones, produce a video stream that is then analyzed by algorithms trained to recognize objects. Tesla’s key partner for the camera system is Mobileye, an Israeli computer-vision company that claims that its software can detect vehicles up to two hundred and thirty feet away using a single standard-resolution camera—all while adding only a thousand dollars to the price of a car.
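As a rough illustration of how such a suite divides up the work by range, consider the hypothetical sketch below. Only the two-hundred-and-thirty-foot camera figure comes from Mobileye’s claim above; the other cutoffs are invented for the example and are not Tesla’s published specifications.

```python
# Hypothetical split of camera / radar / ultrasound duties by range.
# The ultrasound cutoff is an assumption; the 230-foot camera range is
# Mobileye's claim quoted in the text; radar covers what lies beyond.

def preferred_sensor(distance_feet: float) -> str:
    if distance_feet <= 15:       # assumed ultrasound range: immediate vicinity
        return "ultrasound"
    if distance_feet <= 230:      # Mobileye's claimed single-camera detection range
        return "camera"
    return "radar"                # radar handles objects farther away

for d in (5, 100, 500):
    print(d, "ft ->", preferred_sensor(d))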
In October of 2014, Tesla began offering its Model S and X customers a “technology package,” which included this sensor array and cost about four thousand dollars. The equipment allowed the company to record drivers’ movements, unless they opted out of the tracking, and—most important—to start amassing an enormous trove of data. A year later, it remotely activated its “Autopilot” software on tens of thousands of these cars. Suddenly, drivers had the ability to engage some limited autonomous functions, including dynamic cruise control (pegging your car’s speed to the speed of the car in front of you), steering within highway lanes, and on-command lane-changing. Some drivers were unnerved by the Autopilot functions, and cars occasionally swerved or drove off the road. But many of Tesla’s tech-tolerant early adopters loved the new features.
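A minimal sketch of the “dynamic cruise control” idea described above—this illustrates the general technique, not Tesla’s control code, and the gain and target gap are made-up values:

```python
# Toy proportional controller for dynamic cruise control: match the
# lead car's speed while keeping a target following gap.  Illustrative
# only -- the gain and the target gap are arbitrary.

TARGET_GAP_M = 40.0   # desired distance to the car ahead, in metres
GAP_GAIN = 0.1        # how aggressively to correct a gap error

def desired_speed(own_speed: float, lead_speed: float, gap_m: float) -> float:
    """Return an adjusted speed (m/s): the lead car's speed, nudged
    up or down to close or open the following gap."""
    gap_error = gap_m - TARGET_GAP_M
    return max(0.0, lead_speed + GAP_GAIN * gap_error)

# Too close (25 m gap): slow slightly below the lead car's 30 m/s.
print(desired_speed(30.0, 30.0, 25.0))  # 28.5
```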
Autopilot also gave Tesla access to tens of thousands of “expert trainers,” as Musk called them. When these de-facto test drivers overrode the system, Tesla’s sensors and learning algorithms took special note. The company has used its growing data set to continually improve the autonomous-driving experience for Tesla’s entire fleet. By late 2015, Tesla was gathering about a million miles’ worth of driving data every day.
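The fleet-learning loop described here—flag the moments when a human overrides Autopilot, collect them, and use them to improve the system—might be sketched as follows. The event fields and the simple in-memory log are assumptions for illustration, not Tesla’s actual telemetry format.

```python
# Illustrative sketch of fleet learning from driver overrides.  The
# event structure is invented; the point is that each override becomes
# a labelled training example shared across the whole fleet.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OverrideEvent:
    vehicle_id: str
    sensor_snapshot: dict   # camera/radar/ultrasound readings at the moment of override
    driver_action: str      # e.g. "braked", "steered_left"

@dataclass
class FleetDataset:
    events: List[OverrideEvent] = field(default_factory=list)

    def record(self, event: OverrideEvent) -> None:
        # Every override is "special note" for the learning system.
        self.events.append(event)

    def training_examples(self) -> List[Tuple[dict, str]]:
        # Each event pairs what the sensors saw with what the human did.
        return [(e.sensor_snapshot, e.driver_action) for e in self.events]
```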
To understand how commanding a lead this gives the company in the race for real-world autonomous-driving data, consider the comparatively small number of lidar-based autonomous vehicles—all of them test cars—that some of its competitors have on the road. California, where much of the research on self-driving cars is taking place, requires companies to register their autonomous vehicles, so we know that currently Nissan has just four such cars on the road in the state, while Mercedes has five. Google has almost eighty registered in the state (though not all of them are in service); it is also doing limited testing in Arizona, Texas, and Washington. Ford announced earlier this year that it was adding twenty new cars to its test fleet, giving it thirty vehicles on the road in Arizona, California, and Michigan, which it says is the largest fleet of any traditional automaker. By comparison, Tesla has sold roughly thirty-five thousand cars in the U.S. since October of 2014. The quality of the data that these vehicles are producing is unlikely to be as rich as the information the lidar cars are providing, but Tesla’s vastly larger fleet means that its autonomous cars can rack up as much driving experience every day or two as Google’s cars have cumulatively.
With its approach, Tesla has been making a classic information-age wager: that software and processing can beat hardware. Lidar systems will no doubt continue to get cheaper: in February, Velodyne, the world’s preëminent civilian lidar company, announced the release of a pared-down system costing about eight thousand dollars. But, even so, it will take time for suppliers to ramp up production to the point that carmakers can reliably buy lidar devices at a lower cost and deploy them widely. Software, by contrast, can be improved cheaply and continually, and can be updated remotely in an entire fleet of vehicles overnight. The genius of this system is that Tesla doesn’t have to decide up front whether the Model 3 will be self-driving—it can just install the necessary hardware and make the cars autonomous at a later date.
This could prove particularly significant given the profound regulatory and liability challenges that lie ahead for any company looking to create a fully autonomous car. The U.S. Department of Transportation classifies autonomous vehicles on a scale of zero (no autonomous functions) to four (no driver necessary). For both regulatory and technological reasons, Musk’s vision of drivers fetching their autonomous cars from across the country by 2018 seems unlikely. Much less far-fetched is the idea that the Model 3 will be capable of full autonomy, but in limited settings—which would make it a level-three vehicle under the federal system.
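For reference, the zero-to-four scale mentioned above can be written out as a small lookup. The level-zero and level-four descriptions come from the article itself; the intermediate ones are brief paraphrases of the agency’s 2013 definitions, included only for context.

```python
# The Department of Transportation (NHTSA) automation levels referred
# to in the text.  Levels 0 and 4 are described in the article; the
# middle entries are paraphrases of the 2013 NHTSA definitions.

NHTSA_LEVELS = {
    0: "No automation: the driver controls everything.",
    1: "Function-specific automation (e.g., cruise control alone).",
    2: "Combined-function automation (e.g., cruise control plus lane centering).",
    3: "Limited self-driving: full autonomy in some settings; the driver must be ready to take over.",
    4: "Full self-driving: no driver necessary.",
}

print(NHTSA_LEVELS[3])
```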
Such a car would require some degree of monitoring by a driver, but under the right conditions—say, highway driving—it would allow people to mostly disengage from the task of driving in order to read, check e-mail, or watch a movie. (Some Model S drivers report being able to do so already, though from a safety standpoint they really shouldn’t.) Fully autonomous highway driving would only be an interstitial step for city dwellers, but it would be very popular with suburban commuters—and it will be a major technological coup if Tesla beats Google, Nissan, and GM in the race to deliver. That said, from a commercial standpoint, the Model 3 is poised for massive success regardless: more than three hundred and twenty-five thousand consumers have now placed deposits, amounting to fourteen billion dollars in potential sales. Autonomous or not, delivering that volume of vehicles on time, on budget, and on spec will be a challenge unto itself.
Levi Tillemann is the author of “The Great Race: The Global Quest for the Car of the Future,” a managing partner at Valence Strategic, and a fellow at New America.