
Week 11: Autonomous Vehicles

There are several motivations for developing self-driving cars. Probably the biggest is safety: cars driven by computers and sensors would not share human weaknesses such as getting tired at the wheel or driving recklessly. Another reason is that they could save people time and money, since passengers could be productive during commutes and long drives rather than spending that time focused on driving. They also have the potential to improve traffic efficiency and reduce pollution. A further motivation for the companies developing self-driving cars is the profit involved in creating a fully functioning autonomous vehicle. The main argument for self-driving cars is that they would be safer than human drivers, who are prone to mistakes. One argument against this is that system failures in an autonomous vehicle could make it less safe than a human driver. Another argument against self-driving cars is the need to program moral decisions into the car, such as whether to protect the driver or pedestrians first.

The social dilemma of self-driving cars is very difficult. The car must be explicitly programmed on how to handle life-or-death situations where it may need to choose between two different people dying. I think the approach that makes the most sense is for programmers to have the car always choose the course of action that endangers the fewest lives, whether that means sacrificing or saving the passengers of the car; a rough sketch of that rule appears below. This is the most objective way to program artificial intelligence to handle life-and-death scenarios. Whatever approach is taken, it is crucial that the details of the car's decision making in these scenarios be made clear to buyers, so that they are aware their life may be forfeit should they end up in a situation where they must be sacrificed to save more people. Liability for accidents depends on the accident: if the car malfunctions and causes an accident, the creators of the vehicle should be liable; if, however, an accident occurs due to human interference or stupidity, then the human is liable.
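
To make the "fewest lives endangered" rule concrete, here is a minimal illustrative sketch in Python. It is not any manufacturer's actual logic; the maneuver names and fixed casualty counts are hypothetical assumptions, and a real system would estimate risk probabilistically from sensor data rather than from hard-coded numbers.

```python
# Hypothetical sketch of a "minimize lives endangered" rule.
# Maneuvers and counts are made up for illustration only.

def choose_maneuver(candidates):
    """Return the candidate maneuver that endangers the fewest people.

    `candidates` maps a maneuver name to the estimated number of lives
    it puts at risk, counting passengers and pedestrians equally.
    """
    return min(candidates, key=candidates.get)

if __name__ == "__main__":
    options = {
        "swerve_left": 2,     # endangers two pedestrians
        "swerve_right": 1,    # endangers one passenger
        "brake_straight": 3,  # endangers three people in the crosswalk
    }
    print(choose_maneuver(options))  # -> "swerve_right"
```

Even this toy version shows why transparency matters: the rule treats the passenger's life as just one count among others, which is exactly the trade-off buyers would need to be told about.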

The social impact of self-driving cars will be huge. Whatever regulations are placed on autonomous vehicles' decision making in life-or-death scenarios will likely spark a large moral debate. In addition, if self-driving cars become common, the whole dynamic of society will change, since the way people approach car trips and commutes will change completely. The economic impact should be quite significant too: companies involved in the autonomous vehicle industry will likely see huge profits, while car companies that are not involved may struggle to compete. Politically, there will be a lot of debate and legislation over how involved the government should be in regulating these cars. I think the government should have a hand in creating guidelines for self-driving cars to standardize some of the decision-making criteria.

Personally, I do not want a self-driving car. While it could be nice to relax or do work while traveling rather than focusing on the road, I don't think I would be trusting enough to let the car take over completely, and I would likely end up watching what the car is doing anyway, ready to intervene if needed. Even though a functioning autonomous vehicle would likely be safer and less accident-prone than I am, I prefer to be in control of my own life rather than leaving it up to a machine. In addition, I enjoy driving cars, and that is not something I really want to give up.
