Google Defends Itself After Self-Driving Car Gets in a Wreck

One of the promises of autonomous driving is accident-free roadways. But consumers already skeptical of self-driving technology had their suspicions confirmed this week when Google acknowledged that its autonomous car was partially responsible for an accident. The acknowledgment comes just months after Google criticized California over proposed regulations that would require autonomous cars to carry controls allowing a human driver to take over, including a steering wheel, a throttle, and brake pedals.

Google has been arguing that it should be able to test its vehicles on public roadways without steering wheels and other manual controls. In November, it bragged that Google cars had been involved in only 17 minor incidents over six years of testing and more than two million miles of autonomous driving.

Now Google has changed its stance after news of the accident leaked. A self-driving car struck a municipal bus in Mountain View, California, after maneuvering to avoid some sandbags. The Google vehicle was traveling at about 2 mph; the bus was moving at about 15 mph. Google admitted it played a role in the accident, which caused no injuries, because the car assumed the bus would leave it more room. Google said its cars would now be reprogrammed:

“From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future,” Google said in a statement.

The amusing statement means that Google’s cars will need to learn what every human driver already knows: big vehicles expect you to get out of their way, and if you don’t, you risk getting hit.
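
What might that reprogramming look like? Here is a minimal sketch in Python of a size-aware yield heuristic, in the spirit of Google’s statement. Every name and number in it, the vehicle classes, the probabilities, the thresholds, is invented for illustration; Google’s actual planner is obviously far more sophisticated.

```python
# Toy sketch of the behavior change Google describes: discount the
# chance that another vehicle yields based on its size class.
# All classes, probabilities, and thresholds here are hypothetical.
from dataclasses import dataclass

# Assumed base rates: larger vehicles are less likely to give way.
YIELD_LIKELIHOOD = {
    "car": 0.70,
    "truck": 0.45,
    "bus": 0.30,
}

@dataclass
class Vehicle:
    kind: str          # e.g. "car", "truck", "bus"
    speed_mph: float   # observed speed

def should_merge(gap_ok: bool, other: Vehicle) -> bool:
    """Merge only if the gap is physically adequate AND the other
    vehicle is judged sufficiently likely to yield."""
    likelihood = YIELD_LIKELIHOOD.get(other.kind, 0.5)
    if other.speed_mph > 10:
        # A vehicle already moving at speed is even less likely to stop.
        likelihood *= 0.8
    return gap_ok and likelihood > 0.5

# The Mountain View scenario: a bus approaching at 15 mph.
print(should_merge(gap_ok=True, other=Vehicle("bus", 15.0)))  # False: wait
```

Under this toy model the car would have waited for the bus, which is exactly the lesson Google says it has now encoded.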

Chris Urmson, the director of Google’s self-driving car project, said, “We can’t program [the cars] for every conceivable event, there’s an infinite number of them. And so the trick, really, is to have the vehicles generalize what they’ve observed in the past and be able to understand when they don’t know what’s going on. And in those situations, do the cautious thing.”
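
Urmson’s description amounts to a confidence-gated policy: act on the learned model when it is sure, and fall back to a conservative default when it is not. Here is a toy sketch of that idea; the classifier, the threshold, and the action names are hypothetical stand-ins, not anything from Google’s system.

```python
# Sketch of "do the cautious thing": trust the model only when it is
# confident; otherwise default to a conservative maneuver.
CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff

def classify_scene(features: list[float]) -> tuple[str, float]:
    """Stand-in for a learned scene classifier returning a label and a
    confidence score. Real perception stacks are vastly more complex."""
    score = features[0]
    if score > 0.5:
        return "clear_to_proceed", score
    return "obstacle_ahead", 1.0 - score

def choose_action(features: list[float]) -> str:
    label, confidence = classify_scene(features)
    if confidence < CONFIDENCE_THRESHOLD:
        # The car doesn't know what's going on: do the cautious thing.
        return "slow_and_yield"
    return "proceed" if label == "clear_to_proceed" else "stop"

print(choose_action([0.95]))  # confident scene -> "proceed"
print(choose_action([0.60]))  # unfamiliar scene -> "slow_and_yield"
```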

Until the day when self-driving cars can communicate directly with self-driving buses, these artificial intelligence-controlled vehicles face a steep learning curve.

In fairness to Google, the promise of autonomous driving is not that there will never be accidents; moving vehicles will never be entirely accident- and error-free. Google fears that if lawmakers require too many human-operable controls, driving will actually become less safe, since humans would override the self-driving car’s decisions, which are theoretically better than anything the human would come up with.

Equating a human driver with an artificial intelligence system presents many legal hurdles. Although Google and other companies complain about regulations impairing their testing plans, in many cases existing regulations require certain safety equipment, and these laws cannot simply be waived. Many issues are years from being resolved, particularly those involving liability and auto insurance. The legal and insurance industries could be significantly impacted by autonomous driving, although lawyers and insurance agents, like cockroaches, always manage to find a way to survive.



  1. This is somehow hilarious.

  2. Much ado about nothing.

  3. I get their point about too many manual overrides, but during testing surely they need a human while the car “learns” what’s going on.

    • How do you teach cars that other drivers are assholes? Asking for a friend.
