Will We Drive Cars in the Future?
Cars are already driving themselves on roads in California, Texas, Arizona, Washington, Pennsylvania, and Michigan. Though they are still restricted to specific test areas and driving conditions, it’s pretty clear that at some point in the future, completely driverless cars will be a mainstream reality.
Predictions for the arrival of fully autonomous vehicles range from a few years to a few decades, a disparity exacerbated by varying definitions of “autonomous.” While some people use this word to describe cars that self-drive only in specific conditions, others peg their estimates to the point at which cars are so autonomous that they don’t even need a steering wheel or a brake pedal.
The Dichotomy of Driving
The moral argument for mandating autonomy is in direct conflict with a fundamental truth underlying human nature. For many, acquiring a driver’s license is a rite of passage that leads to an ownership dream more accessible than home-ownership, and makes real a concept of freedom inextricably linked to a machine that doubles as a figurative expression of self.
On a cultural level, cars are not merely transportation, but transformation. How we drive and what we drive are two axes on the chart of self-expression. However more efficient or safe an autonomous vehicle, it may prove impossible to convince people to relinquish this channel of self-expression without a massive generational shift, if ever, unless forced. If cars were merely transportation, sports cars wouldn’t exist. Nor would the majority of highly profitable cosmetic and performance options, or the entire aftermarket sector.
Sensor Technologies Used in Self-Driving
Just as a human driver uses eyes to see the road ahead and transfers visual data to the brain, an automated vehicle must use a combination of sensors to transmit data about the nearby environment to its computer processors. Think how much safer a human driver would be if she had eight eyes instead of two. Prototype vehicles today are equipped with bulky equipment on the roof, where it’s easier for sensors to get a 360-degree view of the vicinity. All that gear is basically a collection of two different types of sensors. First is an array of cameras, which takes in the same type of visual information that the human eye does—only in multiple directions at the same time—then feeds that information to a computer. With enough cameras, blind spots are eliminated. Narrow-angle cameras can see clearly at distances beyond human vision, while wide-angle cameras offer superior peripheral vision.
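The claim that enough cameras eliminate blind spots can be made concrete: given each camera’s mounting heading and field of view, one can check whether the rig covers a full 360 degrees. A minimal sketch in Python, using hypothetical camera specifications (the mounting angles and fields of view below are illustrative, not taken from any real vehicle):

```python
# Check whether a set of roof-mounted cameras covers a full 360-degree view.
# Camera specs are hypothetical, for illustration only.

def coverage_gaps(cameras, step=1):
    """Return the headings (in whole degrees) not seen by any camera.

    Each camera is a (mount_heading_deg, field_of_view_deg) pair.
    """
    covered = set()
    for heading, fov in cameras:
        half = fov / 2
        for d in range(int(heading - half), int(heading + half) + 1, step):
            covered.add(d % 360)  # wrap around past 0/360
    return sorted(d for d in range(0, 360, step) if d not in covered)

# Four wide-angle cameras (100-degree FOV) mounted at 90-degree intervals:
rig = [(0, 100), (90, 100), (180, 100), (270, 100)]
print(coverage_gaps(rig))  # [] -> overlapping fields of view, no blind spots
```

The overlap matters: four 90-degree cameras would cover the circle exactly with no margin, which is why real rigs favor redundant, overlapping fields of view.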
Today’s self-driving cars are sometimes described like teenagers: relatively safe in limited situations, but not nearly as safe as an experienced human driver. Making cars safe enough to be let loose on busy roads requires painstaking programming of real-life situations, machine learning, and artificial intelligence, so that they can recognize what’s happening in every conceivable circumstance. They have to process their environment and make safe decisions even about things they’re encountering for the first time. There are essentially two ways to train a vehicle to anticipate the unexpected: program in every possible eventuality, or teach a vehicle to learn and think for itself.
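The difference between those two approaches can be sketched in a toy example: an enumerated rule table fails on any situation it wasn’t explicitly given, while even the simplest learned model (here a 1-nearest-neighbor rule) generalizes to a novel situation by acting like the closest one it has seen. The features and action labels are hypothetical, chosen purely for illustration:

```python
# Toy contrast: hand-coded rule table vs. learning from examples.
# Each situation is (obstacle_distance_m, obstacle_speed_mps); labels are hypothetical.

RULES = {            # enumerated situations -> action
    (5, 0): "brake",
    (50, 0): "proceed",
}

def rule_based(obs):
    """Look the situation up; anything not programmed in is unhandled."""
    return RULES.get(obs, "unknown")

def learned(obs, examples):
    """1-nearest-neighbor: act like the closest situation seen in training."""
    nearest = min(
        examples,
        key=lambda e: (e[0][0] - obs[0]) ** 2 + (e[0][1] - obs[1]) ** 2,
    )
    return nearest[1]

examples = [((5, 0), "brake"), ((50, 0), "proceed")]
novel = (6, 1)  # a situation never seen before

print(rule_based(novel))         # unknown -- the rule table has no entry
print(learned(novel, examples))  # brake   -- closest training example wins
```

Real systems use far richer models than nearest-neighbor, but the structural point is the same: enumeration scales with the number of situations you can foresee, while learning scales with the data you can collect.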
Self-Driving Acceptance Will Not Pop Up Overnight
Over time, a patchwork quilt of autonomy-mandated, autonomy-optional, and autonomy-banned zones will emerge. An equilibrium between technologies will be reached, followed by a long plateau upon which people will slowly come to accept increasing levels of automation. Parallel forms of transportation will evolve and interlock with semi and fully autonomous driving, finally giving meaning to the word “mobility”, which has so far been no more than a catchphrase concealing a lack of clear vision for transportation’s future.