Auto Correct
The Google car knows every turn. It never gets sleepy or distracted, or wonders who has the right-of-way.
Human beings make terrible drivers. They talk on the phone and run red lights, signal to the left and turn to the right. They drink too much beer and plow into trees or veer into traffic as they swat at their kids. They have blind spots, leg cramps, seizures, and heart attacks. They rubberneck, hotdog, and take pity on turtles, cause fender benders, pileups, and head-on collisions. They nod off at the wheel, wrestle with maps, fiddle with knobs, have marital spats, take the curve too late, take the curve too hard, spill coffee in their laps, and roll over their cars. Of the ten million accidents that Americans are in every year, nine and a half million are their own damn fault.
A case in point: the driver in the lane to my right. He’s twisted halfway around in his seat, taking a picture of the Lexus that I’m riding in with an engineer named Anthony Levandowski. Both cars are heading south on Highway 880 in Oakland, going more than seventy miles an hour, yet the man takes his time. He holds his phone up to the window with both hands until the car is framed just so. Then he snaps the picture, checks it onscreen, and taps out a lengthy text message with his thumbs. By the time he puts his hands back on the wheel and glances up at the road, half a minute has passed.
Levandowski shakes his head. He’s used to this sort of thing. His Lexus is what you might call a custom-built model. It’s surmounted by a spinning laser turret and studded with cameras, radar, antennas, and G.P.S. It looks a little like an ice-cream truck, lightly weaponized for inner-city work. Levandowski used to tell people that the car was designed to chase tornadoes or to track mosquitoes, or that he belonged to an élite team of ghost hunters. But nowadays the vehicle is clearly marked: “Self-Driving Car.”
Every week for the past year and a half, Levandowski has taken the Lexus on the same slightly surreal commute. He leaves his house in Berkeley at around eight o’clock, waves goodbye to his fiancée and their son, and drives to his office in Mountain View, forty-three miles away. The drive takes him over surface streets and freeways, old salt flats and pine-green foothills, across the gusty blue of San Francisco Bay, and down into the heart of Silicon Valley. In rush-hour traffic, it can take two hours, but Levandowski doesn’t mind. He thinks of it as research. While other drivers are gawking at him, he is observing them: recording their maneuvers in his car’s sensor logs, analyzing traffic flow, and flagging any problems for future review. The only tiresome part is when there’s roadwork or an accident ahead and the Lexus insists that he take the wheel. A chime sounds, pleasant yet insistent, then a warning appears on his dashboard screen: “In one mile, prepare to resume manual control.”
Levandowski is an engineer at Google X, the company’s semi-secret lab for experimental technology. He turned thirty-three last March but still has the spindly build and nerdy good nature of the kids in my high-school science club. He wears black-framed glasses and oversized neon sneakers, has a long, loping stride—he’s six feet seven—and is given to excitable talk on fantastical themes. Cybernetic dolphins! Self-harvesting farms! Like a lot of his colleagues in Mountain View, Levandowski is equal parts idealist and voracious capitalist. He wants to fix the world and make a fortune doing it. He comes by these impulses honestly: his mother is a French diplomat, his father an American businessman. Although Levandowski spent most of his childhood in Brussels, his English has no accent aside from a certain absence of inflection—the bright, electric chatter of a processor in overdrive. “My fiancée is a dancer in her soul,” he told me. “I’m a robot.”
What separates Levandowski from the nerds I knew is this: his wacky ideas tend to come true. “I only do cool shit,” he says. As a freshman at Berkeley, he launched an intranet service out of his basement that earned him fifty thousand dollars a year. As a sophomore, he won a national robotics competition with a machine made out of Legos that could sort Monopoly money—a fair analogy for what he’s been doing for Google lately. He was one of the principal architects of Street View and the Google Maps database, but those were just warmups. “The Wright Brothers era is over,” Levandowski assured me, as the Lexus took us across the Dumbarton Bridge. “This is more like Charles Lindbergh’s plane. And we’re trying to make it as sturdy and reliable as a 747.”
Not everyone finds this prospect appealing. As a commercial for the Dodge Charger put it two years ago, “Hands-free driving, cars that park themselves, an unmanned car driven by a search-engine company? We’ve seen that movie. It ends with robots harvesting our bodies for energy.” Levandowski understands the sentiment. He just has more faith in robots than most of us do. “People think that we’re going to pry the steering wheel from their cold, dead hands,” he told me, but they have it exactly wrong. Someday soon, he believes, a self-driving car will save your life.
The Google car is an old-fashioned sort of science fiction: this year’s model of last century’s make. It belongs to the gleaming, chrome-plated age of jet packs and rocket ships, transporter beams and cities under the sea, of a predicted future still well beyond our technology. In 1939, at the World’s Fair in New York, visitors stood in lines up to two miles long to see the General Motors Futurama exhibit. Inside, a conveyor belt carried them high above a miniature landscape, spread out beneath a glass dome. Its suburbs and skyscrapers were laced together by superhighways full of radio-guided cars. “Does it seem strange? Unbelievable?” the announcer asked. “Remember, this is the world of 1960.”
Not quite. Skyscrapers and superhighways made the deadline, but driverless cars still putter along in prototype. Human beings, as it turns out, aren’t easy to improve upon. For every accident they cause, they avoid a thousand others. They can weave through tight traffic and anticipate danger, gauge distance, direction, speed, and momentum. Americans drive almost three trillion miles a year, I was told by Ron Medford, a former deputy administrator of the National Highway Traffic Safety Administration who now works for Google. It’s no wonder that we have thirty-two thousand fatalities along the way, he said. It’s a wonder the number is so low.
Levandowski keeps a collection of vintage illustrations and newsreels on his laptop, just to remind him of all the failed schemes and fizzled technologies of the past. When he showed them to me one night at his house, his face wore a crooked smile, like a father watching his son strike out in Little League. From 1957: A sedan cruises down a highway, guided by circuits in the road, while a family plays dominoes inside. “No traffic jam . . . no collisions . . . no driver fatigue.” From 1977: Engineers huddle around a driverless Ford on a test track. “Cars like this one may be on the nation’s roads by the year 2000!” Levandowski shook his head. “We didn’t come up with this idea,” he said. “We just got lucky that the computers and sensors were ready for us.”
Almost from the beginning, the field divided into two rival camps: smart roads and smart cars. General Motors pioneered the first approach in the late nineteen-fifties. Its Firebird III concept car—shaped like a jet fighter, with titanium tail fins and a glass-bubble cockpit—was designed to run on a test track embedded with an electrical cable, like the slot on a toy speedway. As the car passed over the cable, a receiver in its front end picked up a radio signal and followed it around the curve. Engineers at Berkeley later went a step further: they spiked the track with magnets, alternating their polarity in binary patterns to send messages to the car—“Slow down, sharp curve ahead.” Systems like these were fairly simple and reliable, but they had a chicken-and-egg problem. To be useful, they had to be built on a large scale; to be built on a large scale, they had to be useful. “We don’t have the money to fix potholes,” Levandowski says. “Why would we invest in putting wires in the road?”
Smart cars were more flexible but also more complicated. They needed sensors to guide them, computers to steer them, digital maps to follow. In the nineteen-eighties, a German engineer named Ernst Dickmanns, at the Bundeswehr University in Munich, outfitted a Mercedes van with video cameras and processors, then programmed it to follow lane lines. Soon it was steering itself around a track. By 1995, Dickmanns’s car was able to drive on the Autobahn from Munich to Odense, Denmark, going up to a hundred miles at a stretch without assistance. Surely the driverless age was at hand! Not yet. Smart cars were just smart enough to get drivers into trouble. The highways and test tracks they navigated were strictly controlled environments. The moment more variables were added—a pedestrian, say, or a traffic cop—their programming faltered. Ninety-eight per cent of driving is just following the dotted line. It’s the other two per cent that matters.
“There was no way, before 2000, to make something interesting,” the roboticist Sebastian Thrun told me. “The sensors weren’t there, the computers weren’t there, and the mapping wasn’t there. Radar was a device on a hilltop that cost two hundred million dollars. It wasn’t something you could buy at Radio Shack.” Thrun, who is forty-six, is the founder of the Google Car project. A wunderkind from the West German city of Solingen, he programmed his first driving simulator at the age of twelve. Slender and tan, with clear blue eyes and a smooth, seemingly boneless gait, he looks as if he just stepped off a dance floor in Ibiza. And yet, like Levandowski, he has a gift for seeing things through a machine’s eyes—for intuiting the logic by which it might apprehend the world.
When Thrun first arrived in the United States, in 1995, he took a job at the country’s leading center for driverless-car research: Carnegie Mellon University. He went on to build robots that explored mines in Virginia, guided visitors through the Smithsonian, and chatted with patients at a nursing home. What he didn’t build was driverless cars. Funding for private research in the field had dried up by then. And although Congress had set a goal that a third of all ground combat vehicles be autonomous by 2015, little had come of the effort. Every so often, Thrun recalls, military contractors, funded by the Defense Advanced Research Projects Agency, would roll out their latest prototype. “The demonstrations I witnessed mostly ended in crashes and breakdowns in the first half mile,” he told me. “DARPA was funding people who weren’t solving the problem. But they couldn’t tell if it was the technology or the people. So they did this crazy thing, which was truly visionary.”
They held a race.
The first DARPA Grand Challenge took place in the Mojave Desert on March 13, 2004. It offered a million-dollar prize for what seemed like a simple task: build a car that can drive a hundred and forty-two miles without human intervention. Ernst Dickmanns’s car had gone similar distances on the Autobahn, but always with a driver in the seat to take over in the tricky stretches. The cars in the Grand Challenge would be empty, and the road would be rough: from Barstow, California, to Primm, Nevada. Instead of smooth curves and long straightaways, it had rocky climbs and hairpin turns; instead of road signs and lane lines, G.P.S. waypoints. “Today, we could do it in a few hours,” Thrun told me. “But at the time it felt like going to the moon in sneakers.”
Levandowski first heard about it from his mother. She’d seen a notice for the race when it was announced online, in 2002, and recalled that her son used to play with remote-control cars as a boy, crashing them into things on his bedroom floor. Was this so different? Levandowski was now a student at Berkeley, in the industrial-engineering department. When he wasn’t studying or rowing crew or winning Lego competitions, he was casting about for cool new shit to build—for a profit, if possible. “If he’s making money, it’s his confirmation that he’s creating value,” his friend Randy Miller told me. “I remember, when we were in college, we were at his house one day, and he told me that he’d rented out his bedroom. He’d put up a wall in his living room and was sleeping on a couch in one half, next to a big server tower that he’d built. I said, ‘Anthony, what the hell are you doing? You’ve got plenty of money. Why don’t you get your own place?’ And he said, ‘No. Until I can move to a stateroom on a 747, I want to live like this.’ ”
DARPA’s rules were vague on the subject of vehicles: anything that could drive itself would do. So Levandowski made a bold decision. He would build the world’s first autonomous motorcycle. This seemed like a stroke of genius at the time. (Miller says that it came to them in a hot tub in Tahoe, which sounds about right.) Good engineering is all about gaming the system, Levandowski says—about sidestepping obstacles rather than trying to overcome them. His favorite example is from a robotics contest at M.I.T. in 1991. Tasked with building a machine that could shoot the most Ping-Pong balls into a tube, the students came up with dozens of ingenious contraptions. The winner, however, was infuriatingly simple: it had a mechanical arm reach over, drop a ball into the tube, then cover it up so that no others could get in. It won the contest in a single move. The motorcycle could be like that, Levandowski thought: quicker off the mark than a car and more maneuverable. It could slip through tighter spaces and drive just as fast. Also, it was a good way to get back at his mother, who’d never let him ride motorcycles as a kid. “Fine,” he thought. “I’ll just make one that rides itself.”
The flaw in this plan was obvious: a motorcycle can’t stand up on its own. It needs a rider to balance it—or else a sophisticated, computer-controlled system of shafts and motors to adjust its position every hundredth of a second. “Before you can drive ten feet you have to do a year of engineering,” Levandowski says. The other racers had no such problem. They also had substantial academic and corporate backing: the Carnegie Mellon team was working with General Motors, Caltech with Northrop Grumman, Ohio State with Oshkosh trucking. When Levandowski went to the Berkeley faculty with his idea, the reaction was, at best, bemused disbelief. His adviser, Ken Goldberg, told him frankly that he had no chance of winning. “Anthony is probably the most creative undergraduate I’ve encountered in twenty years,” he told me. “But this was a real stretch.”
Levandowski was unfazed. Over the next two years, he made more than two hundred cold calls to potential sponsors. He gradually scraped together thirty thousand dollars from Raytheon, Advanced Micro Devices, and others. (No motorcycle company was willing to put its name on the project.) Then he added a hundred thousand dollars of his own. In the meantime, he went about poaching the faculty’s graduate students. “He paid us in burritos,” Charles Smart, now a professor of mathematics at M.I.T., told me. “Always the same burritos. But I recall thinking, I hope he likes me and lets me work on this.” Levandowski had that effect on people. His mad enthusiasm for the project was matched only by his technical grasp of its challenges—and his willingness to go to any lengths to meet them. At one point, he offered Smart’s girlfriend and future wife five thousand dollars to break up with him until the project was done. “He was quite serious,” Smart told me. “She hated the motorcycle project.”
There came a day when Goldberg realized that half his Ph.D. students had been working for Levandowski. They’d begun with a Yamaha dirt bike, made for a child, and stripped it down to its skeleton. They added cameras, gyros, G.P.S. modules, computers, roll bars, and an electric motor to turn the wheel. They wrote tens of thousands of lines of code. The videos of their early test runs, edited together, play like a jittery reel from “The Benny Hill Show”: bike takes off, engineers jump up and down, bike falls over—more than six hundred times in a row. “We built the bike and rebuilt the bike, just sort of groping in the dark,” Smart told me. “It’s like one of my colleagues once said: ‘You don’t understand, Charlie, this is robotics. Nothing actually works.’ ”
Eventually, a year into the project, a Russian engineer named Alex Krasnov cracked the code. They’d thought that stability was a complex, nonlinear problem, but it turned out to be fairly simple. When the bike tipped to one side, Krasnov had it steer ever so slightly in the same direction. This created centrifugal acceleration that pulled the bike upright again. By doing this over and over, tracing little S-curves as it went, the motorcycle could hold to a straight line. In the video clip from that day, the bike wobbles a little at first, like a baby giraffe finding its legs, then suddenly, confidently circles the field—as if guided by an invisible hand. They called it the Ghost Rider.
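Krasnov’s fix amounts to a simple feedback loop, and the idea can be sketched in a few lines. This is an illustration only, not the Ghost Rider’s actual code: the bike is modeled as an inverted pendulum with small-angle dynamics, and every constant and gain below is an assumed, made-up value.

```python
# Sketch of balance-by-steering: when the bike leans, steer slightly in the
# same direction; the resulting centripetal acceleration rights the bike.
G = 9.81   # gravity, m/s^2
H = 0.6    # height of the center of mass, m (assumed)
V = 3.0    # forward speed, m/s (assumed)
B = 1.0    # wheelbase, m (assumed)
K_LEAN, K_RATE = 3.0, 1.0   # controller gains (assumed)

def balance(lean0, dt=0.01, steps=500):
    """Simulate the lean angle (radians) under the steer-into-the-fall rule."""
    lean, rate = lean0, 0.0
    for _ in range(steps):
        # Steer toward the lean: the front wheel traces a small S-curve.
        steer = K_LEAN * lean + K_RATE * rate
        # Gravity tips the bike over; steering pulls it back upright.
        accel = (G / H) * lean - (V * V / (B * H)) * steer
        rate += accel * dt
        lean += rate * dt
    return lean

# Start at roughly 8.6 degrees of lean; after five simulated seconds the
# bike is essentially upright.
final_lean = abs(balance(0.15))
print(final_lean)
```

With the steering gain large enough (here, enough that the corrective term outweighs gravity’s), the lean angle decays to zero instead of growing; with the gain at zero, the same loop shows the bike falling over, which is exactly what six hundred test runs looked like.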
The Grand Challenge proved to be one of the more humbling events in automotive history. Its sole consolation lay in collective misery. None of the fifteen finalists made it past the first ten miles; seven broke down within a mile. Ohio State’s six-wheel, thirty-thousand-pound TerraMax was brought up short by some bushes; Caltech’s Chevy Tahoe crashed into a fence. Even the winner, Carnegie Mellon, earned at best a Pyrrhic victory. Its robotic Humvee, Sandstorm, drove just seven and a half miles before careering off course. A helicopter later found it beached on an embankment, wreathed in smoke, its back wheels spinning so furiously that they’d burst into flame.
As for the Ghost Rider, it managed to beat out more than ninety cars in the qualifying round—a mile-and-a-half obstacle course at the California Speedway in Fontana. But that was its high-water mark. On the day of the Grand Challenge, standing at the starting line in Barstow, half delirious with adrenaline and fatigue, Levandowski forgot to turn on the stability program. When the gun went off, the bike sputtered forward, went three feet, and fell over.
“That was a dark day,” Levandowski says. It took him a while to get over it—at least by his hyperactive standards. “I think I took, like, four days off,” he told me. “And then I was like, Hey, I’m not done yet! I need to go fix this!” DARPA evidently had the same thought. Three months later, the agency announced a second Grand Challenge for the following October, doubling the prize money to two million dollars. To win, the teams would have to address a daunting list of failures and shortcomings, from fried hard drives to faulty satellite equipment. But the underlying issue was always the same: as Joshua Davis later wrote in Wired, the robots just weren’t smart enough. In the wrong light, they couldn’t tell a bush from a boulder, a shadow from a solid object. They reduced the world to a giant marble maze, then got stuck between the slots. They needed to raise their I.Q.
In the early nineties, Dean Pomerleau, a roboticist at Carnegie Mellon, had hit upon an unusually efficient way to do this: he let his car train itself. Pomerleau equipped the computer in his minivan with artificial neural networks, modelled on those in the brain. As he drove around Pittsburgh, they kept track of his driving decisions, gathering statistics and formulating their own rules of the road. “When we began, the car was going about two to four miles an hour along a path through a park—you could ride a tricycle faster,” Pomerleau told me. “By the end, it was going fifty-five miles per hour on highways.” In 1996, the car steered itself from Washington, D.C., to San Diego with only minimal intervention—nearly four times as far as Ernst Dickmanns’s cars had gone a year earlier. “No Hands Across America,” Pomerleau called it.
Machine learning is an idea almost as old as computer science—Alan Turing, one of the fathers of the field, considered it the essence of artificial intelligence. It’s often the fastest way for a computer to learn a complex behavior, but it has its drawbacks. A self-taught car can come to some strange conclusions. It may mistake the shadow of a tree for the edge of the road, or reflected headlights for lane markers. It may decide that a bag floating across a road is a solid object and swerve to avoid it. It’s like a baby in a stroller, deducing the world from the faces and storefronts that flicker by. It’s hard to know what it knows. “Neural networks are like black boxes,” Pomerleau says. “That makes people nervous, particularly when they’re controlling a two-ton vehicle.”
Computers, like children, are more often instructed by rote. They’re given thousands of rules and bits of data to memorize—If X happens, do Y; avoid big rocks—then sent out to test them by trial and error. This is slow, painstaking work, but it’s easier to predict and refine than machine learning. The trick, as in any educational system, is to combine the two in proper measure. Too much rote learning can make for a plodding machine. Too much experiential learning can make for blind spots and caprice. The roughest roads in the Grand Challenge were often the easiest to navigate, because they had clear paths and well-defined shoulders. It was on the open, sandy trails that the cars tended to go crazy. “Put too much intelligence into a car and it becomes creative,” Sebastian Thrun told me.
The second Grand Challenge put these two approaches to the test. Almost two hundred teams signed up for the race, but the top contenders were clear from the start: Carnegie Mellon and Stanford. The C.M.U. team was led by the legendary roboticist William (Red) Whittaker. (Pomerleau had left the university by then to start his own firm.) A burly, bullet-headed ex-marine, Whittaker specialized in machines for remote and dangerous locations. His robots had crawled over Antarctic ice fields and active volcanoes, and studied the damaged nuclear reactors at Three Mile Island and Chernobyl. Seconded by a brilliant young engineer named Chris Urmson, Whittaker approached the race as a military operation, best won by overwhelming force. His team spent twenty-eight days laser-scanning the Mojave to create a computer model of its topography; then they combined those scans with satellite data to help identify obstacles. “People don’t count those who died trying,” he later told me.
The Stanford team was led by Thrun. He hadn’t taken part in the first race, when he was still just a junior faculty member at C.M.U. But by the following summer he had accepted an endowed professorship in Palo Alto. When DARPA announced the second race, he heard about it from one of his Ph.D. students, Mike Montemerlo. “His assessment of whether we should do it was no, but his body and his eyes and everything about him said yes,” Thrun recalls. “So he dragged me into it.” The contest would be a study in opposites: Thrun the suave cosmopolitan; Whittaker the blustering field marshal. Carnegie Mellon with its two military vehicles, Sandstorm and Highlander; Stanford with its small Volkswagen Touareg, nicknamed Stanley.
It was an even match. Both teams used similar sensors and software, but Thrun and Montemerlo concentrated more heavily on machine learning. “It was our secret weapon,” Thrun told me. Rather than program the car with models of the rocks and bushes it should avoid, Thrun and Montemerlo simply drove it down the middle of a desert road. The lasers on the roof scanned the area around the car, while the camera looked further ahead. By analyzing this data, the computer learned to identify the flat parts as road and the bumpy parts as shoulders. It also compared its camera images with its laser scans, so that it could tell what flat terrain looked like from a distance—and therefore drive a lot faster. “Every day it was the same,” Thrun recalls. “We would go out, drive for twenty minutes, realize there was some software bug, then sit there for four hours reprogramming and try again. We did that for four months.” When they started, one out of every eight pixels that the computer labelled as an obstacle was nothing of the sort. By the time they were done, the error rate had dropped to one in fifty thousand.
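Stanley’s trick—letting near-field laser labels teach the camera what drivable terrain looks like—is a form of self-supervised learning. Here is a minimal sketch of the idea, with synthetic data and a single made-up color feature standing in for the real pipeline; the means, spreads, and classifier are all assumptions for illustration.

```python
# Self-supervised terrain classification, toy version: the "laser" labels
# near-field pixels as road (flat) or shoulder (bumpy); those labels train a
# camera-side classifier that can then label far-field pixels the laser
# cannot reach.
import random
random.seed(0)

def sample_pixel(is_road):
    # Hypothetical one-number color feature: road pixels cluster around one
    # value, shoulder pixels around another, with noise.
    mean = 0.3 if is_road else 0.7
    return min(1.0, max(0.0, random.gauss(mean, 0.08)))

# Near-field pixels, labeled "for free" by the laser's flat/bumpy reading.
train = [(sample_pixel(True), True) for _ in range(500)] + \
        [(sample_pixel(False), False) for _ in range(500)]

# Train a nearest-centroid classifier: one mean color per class.
road_mean = sum(x for x, r in train if r) / sum(1 for _, r in train if r)
shoulder_mean = sum(x for x, r in train if not r) / sum(1 for _, r in train if not r)

def classify(x):
    """True if the pixel looks more like road than shoulder."""
    return abs(x - road_mean) < abs(x - shoulder_mean)

# Far-field pixels: only the camera sees these, so the learned model labels them.
test = [(sample_pixel(True), True) for _ in range(500)] + \
       [(sample_pixel(False), False) for _ in range(500)]
accuracy = sum(classify(x) == truth for x, truth in test) / len(test)
print(round(accuracy, 3))
```

The design point is that no human ever labels a pixel: the laser supervises the camera, and the camera extends the laser’s short reach into the distance, which is what let Stanley drive faster than its sensors’ range would otherwise allow.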
On the day of the race, two hours before start time, DARPA sent out the G.P.S. coördinates for the course. It was even tougher than the first time: more turns, narrower lanes, three tunnels, and a mountain pass. Carnegie Mellon, with two cars to Stanford’s one, decided to play it safe. They had Highlander run at a fast clip—more than twenty miles an hour on average—while Sandstorm hung back a little. The difference was enough to cost them the race. When Highlander began to lose power because of a pinched fuel line, Stanley moved ahead. By the time it crossed the finish line, six hours and fifty-three minutes after it began, it was more than ten minutes ahead of Sandstorm and more than twenty minutes ahead of Highlander.
It was a triumph of the underdog, of brain over brawn. But less for Stanford than for the field as a whole. Five cars finished the hundred-and-thirty-two-mile course; more than twenty cars went farther than the winner had in 2004. In one year, they’d made more progress than DARPA’s contractors had in twenty. “You had these crazy people who didn’t know how hard it was,” Thrun told me. “They said, ‘Look, I have a car, I have a computer, and I need a million bucks.’ So they were doing things in their home shops, putting something together that had never been done in robotics before, and some were insanely great.” A team of students from Palos Verdes High School in California, led by a seventeen-year-old named Chris Seide, built a self-driving “Doom Buggy” that, Thrun recalls, could change lanes and stop at stop signs. A Ford S.U.V. programmed by some insurance-company employees from Louisiana finished just thirty-seven minutes behind Stanley. Their lead programmer had lifted his preliminary algorithms from textbooks on video-game design.
“When you look back at that first Grand Challenge, we were in the Stone Age compared to where we are now,” Levandowski told me. His motorcycle embodied that evolution. Although it never made it out of the semifinals of the second race—tripped up by some wooden boards—the Ghost Rider had become, in its way, a marvel of engineering, beating out seventy-eight four-wheeled competitors. Two years later, the Smithsonian added the motorcycle to its collection; a year after that, it added Stanley as well. By then, Thrun and Levandowski were both working for Google.
The driverless-car project occupies a lofty, garagelike space in suburban Mountain View. It’s part of a sprawling campus built by Silicon Graphics in the early nineties and repurposed by Google, the conquering army, a decade later. Like a lot of high-tech offices, it’s a combination of the whimsical and the workaholic—candy-colored sheet metal over a sprung-steel chassis. There’s a Foosball table in the lobby, exercise balls in the sitting room, and a row of what look like clown bicycles parked out front, free for the taking. When you walk in, the first things you notice are the wacky tchotchkes on the desks: Smurfs, “Star Wars” toys, Rube Goldberg devices. The next things you notice are the desks: row after row after row, each with someone staring hard at a screen.
It had taken me two years to gain access to this place, and then only with a staff member shadowing my every step. Google guards its secrets more jealously than most. At the gourmet cafeterias that dot the campus, signs warn against “tailgaters”—corporate spies who might slip in behind an employee before the door swings shut. Once inside, however, the atmosphere shifts from vigilance to an almost missionary zeal. “We want to fundamentally change the world with this,” Sergey Brin, the co-founder of Google, told me.
Brin was dressed in a charcoal hoodie, baggy pants, and sneakers. His scruffy beard and flat, piercing gaze gave him a Rasputinish quality, dulled somewhat by his Google Glass eyewear. At one point, he asked if I’d like to try the glasses on. When I’d placed the miniature projector in front of my right eye, a single line of text floated poignantly into view: “3:51 P.M. It’s okay.”
“As you look outside, and walk through parking lots and past multilane roads, the transportation infrastructure dominates,” Brin said. “It’s a large tax on the land.” Most cars are used only for an hour or two a day, he said. The rest of the time, they’re parked on the street or in driveways and garages. But if cars could drive themselves, there would be no need for most people to own them. A fleet of vehicles could operate as a personalized public-transportation system, picking people up and dropping them off independently, waiting at parking lots between calls. They’d be cheaper and more efficient than taxis—by some calculations, they’d use half the fuel and a fifth the road space of ordinary cars—and far more nimble than buses or subways. Streets would clear, highways shrink, parking lots turn to parkland. “We’re not trying to fit into an existing business model,” Brin said. “We are just on such a different planet.”
When Thrun and Levandowski first came to Google, in 2007, they were given a simpler task: to create a virtual map of the country. The idea came from Larry Page, the company’s other co-founder. Five years earlier, Page had strapped a video camera on his car and taken several hours of footage around the Bay Area. He’d then sent it to Marc Levoy, a computer-graphics expert at Stanford, who created a program that could paste such footage together to show an entire streetscape. Google engineers went on to jury-rig some vans with G.P.S. and rooftop cameras that could shoot in every direction. Eventually, they were able to launch a system that could show three-hundred-and-sixty-degree panoramas for any address. But the equipment was unreliable. When Thrun and Levandowski came on board, they helped the team retool and reprogram. Then they outfitted a hundred cars and sent them all over the United States.
Google Street View has since spread to more than a hundred countries. It’s both a practical tool and a kind of magic trick—a spyglass onto distant worlds. To Levandowski, though, it was just a start. The same data, he argued, could be used to make digital maps more accurate than those based on G.P.S. data, which Google had been leasing from companies like NAVTEQ. The street and exit names could be drawn straight from photographs, for example, rather than from faulty government records. This sounded simple enough but proved to be fiendishly complicated. Street View mostly covered urban areas, but Google Maps had to be comprehensive: every logging road logged on a computer, every gravel drive driven down. Over the next two years, Levandowski shuttled back and forth to Hyderabad, India, to train more than two thousand data processors to create new maps and fix old ones. When Apple’s new mapping software failed so spectacularly a year ago, he knew exactly why. By then, his team had spent five years injecting several million corrections a day.
Street View and Maps were logical extensions of a Google search. They showed you where to find the things you’d found. What was missing was a way to get there. Thrun, despite his victory in the second Grand Challenge, didn’t think that driverless cars could work on surface streets—there were just too many variables. “I would have told you then that there is no way on earth we can drive safely,” he says. “All of us were in denial that this could be done.” Then, in February of 2008, Levandowski got a call from a producer of “Prototype This!,” a series on the Discovery Channel. Would he be interested in building a self-driving pizza-delivery car? Within five weeks, he and a team of fellow Berkeley graduates and other engineers had retrofitted a Prius for the purpose. They patched together a guidance system and persuaded the California Highway Patrol to let the car cross the Bay Bridge—from San Francisco to Treasure Island. It would be the first time an unmanned car had driven legally on American streets.
On the day of the filming, the city looked as if it were under martial law. The lower level of the bridge was closed to regular traffic, and eight police cruisers and eight motorcycle cops were assigned to accompany the Prius over it. “Obama was there the week before and he had a smaller escort,” Levandowski recalls. The car made its way through downtown and crossed the bridge in fine form, only to wedge itself against a concrete wall on the far side. Still, it gave Google the nudge that it needed. Within a few months, Page and Brin had called Thrun to green-light a driverless-car project. “They didn’t even talk about budget,” Thrun says. “They just asked how many people I needed and how to find them. I said, ‘I know exactly who they are.’ ”
Every Monday at eleven-thirty, the lead engineers for the Google car project meet for a status update. They mostly fit a familiar Silicon Valley demographic—white, male, thirty to forty years old—but they come from all over the world. I counted members from Belgium, Holland, Canada, New Zealand, France, Germany, China, and Russia at one sitting. Thrun began by cherry-picking the top talent from the Grand Challenges: Chris Urmson was hired to develop the software, Levandowski the hardware, Mike Montemerlo the digital maps. (Urmson now directs the project, while Thrun has shifted his attention to Udacity, an online education company that he co-founded two years ago.) Then they branched out to prodigies of other sorts: lawyers, laser designers, interface gurus—anyone, at first, except automotive engineers. “We hired a new breed,” Thrun told me. People at Google X had a habit of saying that So-and-So on the team was the smartest person they’d ever met, until the circle closed and almost everyone had been singled out by someone else. As Levandowski said of Thrun, “He thinks at a hundred miles an hour. I like to think at ninety.”
When I walked in one morning, the team was slouched around a conference table in T-shirts and jeans, discussing the difference between the Gregorian and the Julian calendar. The subtext, as usual, was time. Google’s objective isn’t to create a glorified concept car—a flashy idea that will never make it to the street—but a polished commercial product. That means real deadlines and continual tests and redesigns. The main topic for much of that morning was the user interface. How aggressive should the warning sounds be? How many pedestrians should the screen show? In one version, a jaywalker appeared as a red dot outlined in white. “I really don’t like that,” Urmson said. “It looks like a real-estate sign.” The Dutch designer nodded and promised an alternative for the next round. Every week, several dozen Google volunteers test-drive the cars and fill out user surveys. “In God we trust,” the company faithful like to say. “Everyone else, bring data.”
In the beginning, Brin and Page presented Thrun’s team with a series of DARPA-like challenges. They managed the first in less than a year: to drive a hundred thousand miles on public roads. Then the stakes went up. Like boys plotting a scavenger hunt, Brin and Page pieced together ten itineraries of a hundred miles each. The roads wound through every part of the Bay Area—from the leafy lanes of Menlo Park to the switchbacks of Lombard Street. If the driver took the wheel or tapped the brakes even once, the route was disqualified. “I remember thinking, How can you possibly do that?” Urmson told me. “It’s hard to game driving through the middle of San Francisco.”
They began the project with Levandowski’s pizza car and Stanford’s open-source software. But they soon found that they had to rebuild from scratch: the car’s sensors were already outdated, the software just glitchy enough to be useless. The DARPA cars hadn’t concerned themselves with passenger comfort. They just went from point A to point B as efficiently as possible. To smooth out the ride, Thrun and Urmson had to make a deep study of the physics of driving. How does the surface of a road change as it goes around a curve? How do tire traction and deformation affect steering? Braking for a light seems simple enough, but good drivers don’t apply constant pressure, as a computer might. They build it up gradually, hold it for a moment, then back off again.
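That braking rhythm—build, hold, release—can be sketched as a simple pressure curve over the duration of a stop. This is an illustrative model only, not Google’s actual controller; the function name, timings, and peak value are invented for the example:

```python
def brake_profile(t, t_stop=3.0, peak=0.6):
    """Illustrative brake pressure (0 to 1) at time t during a stop of
    t_stop seconds: build pressure gradually, hold it briefly, then
    ease off near the end, rather than applying constant pressure the
    way a naive controller would."""
    ramp_end = 0.4 * t_stop   # building phase ends here
    hold_end = 0.8 * t_stop   # holding phase ends here
    if t < 0 or t > t_stop:
        return 0.0
    if t < ramp_end:                                   # build up
        return peak * (t / ramp_end)
    if t < hold_end:                                   # hold
        return peak
    return peak * (t_stop - t) / (t_stop - hold_end)   # back off
```

Sampling the curve shows the shape: halfway through the ramp the pressure is half of peak, it plateaus in the middle of the stop, and it falls back to zero as the car settles.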
For complicated moves like that, Thrun’s team often began with machine learning, then reinforced it with rule-based programming—a superego to control the id. They had the car teach itself to read street signs, for example, but they backed up that skill with specific instructions: “STOP” means stop. If the car still had trouble, they’d download the sensor data, replay it on the computer, and fine-tune the response. Other times, they’d run simulations based on accidents documented by the National Highway Traffic Safety Administration. A mattress falls from the back of a truck. Should the car swerve to avoid it or plow ahead? How much advance warning does it need? What if a cat runs into the road? A deer? A child? These were moral questions as well as mechanical ones, and engineers had never had to answer them before. The DARPA cars didn’t even bother to distinguish between road signs and pedestrians—or “organics,” as engineers sometimes call them. They still thought like machines.
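The superego-over-id arrangement amounts to letting a learned model propose and hard-coded rules dispose. A minimal sketch, with hypothetical labels and thresholds that are not from Google’s system:

```python
def interpret_sign(learned_label, confidence):
    """A learned classifier proposes a sign label with some confidence;
    rule-based code gets the last word. Mandatory signs trigger their
    action regardless of confidence; anything uncertain is handled
    conservatively."""
    MANDATORY = {"STOP": "full_stop", "YIELD": "yield"}
    if learned_label in MANDATORY:
        return MANDATORY[learned_label]   # "STOP" means stop, no matter what
    if confidence < 0.9:
        return "slow_and_reassess"        # low confidence: err on the side of caution
    return "proceed"
```

The point of the structure is that the rules never rely on the model being right; they only constrain what the car may do with the model’s guess.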
Four-way stops were a good example. Most drivers don’t just sit and wait their turn. They nose into the intersection, nudging ahead while the previous car is still passing through. The Google car didn’t do that. Being a law-abiding robot, it waited until the intersection was completely clear—and promptly lost its place in line. “The nudging is a kind of communication,” Thrun told me. “It tells people that it’s your turn. The same thing with lane changes: if you start to pull into a gap and the driver in that lane moves forward, he’s giving you a clear no. If he pulls back, it’s a yes. The car has to learn that language.”
It took the team a year and a half to master Page and Brin’s ten hundred-mile road trips. The first one ran from Monterey to Cambria, along the cliffs of Highway 1. “I was in the back seat, screaming like a little girl,” Levandowski told me. One of the last started in Mountain View, went east across the Dumbarton Bridge to Union City, back west across the bay to San Mateo, north on 101, east over the Bay Bridge to Oakland, north through Berkeley and Richmond, back west across the bay to San Rafael, south to the mazy streets of the Tiburon Peninsula, so narrow that they had to tuck in the side mirrors, and over the Golden Gate Bridge to downtown San Francisco. When they finally arrived, past midnight, they celebrated with a bottle of champagne. Now they just had to design a system that could do the same thing in any city, in all kinds of weather, with no chance of a do-over. Really, they’d just begun.
These days, Levandowski and the other engineers divide their time between two models: the Prius, which is used to test new sensors and software; and the Lexus, which offers a more refined but limited ride. (The Prius can drive on surface streets; the Lexus only on highways.) As the cars have evolved, they’ve sprouted appendages and lost them again, like vat-grown creatures in a science-fiction movie. The cameras and radar are now tucked behind sheet metal and glass, the laser turret shrunk from a highway cone to a sand pail. Everything is smaller, sleeker, and more powerful than before, but there’s still no mistaking the cars. When Levandowski picked me up or dropped me off near the Berkeley campus on his commute, students would look up from their laptops and squeal, then run over to take snapshots of the car with their phones. It was their version of the Oscar Mayer Wienermobile.