In October 2014, electric car manufacturer Tesla offered its customers a “technology package” for an additional $4,250 with the purchase of one of its electric vehicles. According to MIT Technology Review, the package included ultrasonic sensors and cameras surrounding the car that could predict potential collisions and allow the car to assume control in dangerous situations to prevent crashes. As groundbreaking as this technology was, it was merely a precursor to Tesla’s vision: a completely autonomous car.
After gathering data for a year, Tesla sent a software update in October 2015 to 60,000 vehicles that had been purchased with the technology package. The update allows the driver to enable an autopilot mode in which the vehicle drives itself. The car’s sensors allow it to detect traffic, change lanes, control speed, and even park the vehicle once the driver arrives at the destination.
Although Tesla’s endgame is a completely autonomous car, Tesla CEO Elon Musk describes the current software as more of “a beta release.” According to Slate, the autopilot feature does not remove all responsibility from the driver. The car will warn drivers to return their hands to the wheel if they remove them for too long, and Tesla discourages drivers from relying on the software completely. The company continues to gather data to improve the software so it can move toward true autonomous driving. Despite this, several Tesla owners have posted YouTube videos of themselves testing the limits of the technology and having to take over to avoid a potential crash. In the following video, one YouTuber even goes so far as to say that the vehicle tried to kill him.
In the meantime, other car companies such as Toyota and Google are also working on autonomous car projects. Google, however, will not release its vehicle, which removes manual controls entirely, until testing deems it completely safe. Toyota worries that errors in Tesla’s software could cause accidents and alienate drivers from autonomous cars before the software reaches complete reliability. So far, though, Tesla’s software has remained solid in protecting its drivers and serving its anti-collision purpose.
Of course, Tesla’s autopilot feature raises many issues in the man-versus-machine category. Do the risks of a software malfunction or system failure truly outweigh the risks of human error? If an accident occurs due to a system failure of the automated vehicle, is the driver or the manufacturer liable? I would say the answer to the first question is entirely in Tesla’s hands, depending on how the company continues to gather data to keep its drivers safe. The answer to the second question is harder to come by and would probably be best examined on a case-by-case basis.
I’m very curious to see how this technology continues to develop and how it changes the face of travel in our world. If Google pulls through with its completely autonomous vehicles, which don’t even have a means for manual control, we might be seeing empty cars traveling the highway to pick up passengers who ordered them from their smartphones!