The ultimate goal of the automotive industry is to have your car drive itself, but the process isn’t simple.

Artificial Intelligence in Autonomous Vehicles Today
Self-driving cars are a major advancement in automotive history. However, the arrival of driverless vehicles is taking longer than anticipated. Recent predictions suggest that a fully self-driving car won’t be developed by the automotive industry until 2035.
While everyone agrees that autonomous vehicles are the future, the timing of their arrival is a topic of much debate.
The road to full autonomy is more complicated than it seems, despite the enthusiasm from the automotive industry and its eager customers. Advancing self-driving systems requires not only technological progress but also acceptance by society and adherence to regulations. There are numerous factors to consider: safety, reliability, infrastructure adaptation, and legal frameworks are all crucial aspects that demand careful consideration before self-driving cars can gain widespread acceptance.
Now, let’s consider the timeline. Cars currently in production will likely remain on the road for at least 20 years. Although these cars are partially automated, they are not fully autonomous. This means the transition to completely self-driving cars will be gradual, and human drivers will continue to share the roads with autonomous vehicles for quite some time. This mixed traffic presents a whole set of challenges yet to be discovered.
In spite of these hurdles, researchers are using artificial intelligence (AI) to speed up the development of driverless vehicles. They are working on new methods that combine reinforcement learning with neural networks to improve the performance and safety of self-driving cars. These efforts are part of a broader trend in the automotive industry, where AI and machine learning technologies are increasingly driving innovation.
The environment seems to concur. Looking at the data from CES 2024, it’s clear that the automotive sector is emphasizing sustainability and AI-driven technologies. Advanced features such as lidar sensors, which use pulsed laser light to measure distances, are playing a crucial role in the advancement of autonomous vehicles.
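Since lidar measures distance by timing pulsed laser light, the underlying calculation is a simple time-of-flight formula: distance equals the speed of light times half the round-trip time (halved because the pulse travels out and back). A minimal sketch, with an illustrative 400-nanosecond example:

```python
# Time-of-flight distance for a lidar pulse: the laser light travels to the
# target and back, so distance is speed of light times half the round-trip time.
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Distance to a target from a lidar pulse's round-trip time, in metres."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# A pulse that returns after roughly 400 nanoseconds indicates a target
# about 60 metres away.
print(round(lidar_distance_m(400e-9), 1))
```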
It’s fair to say that technological progress is a key factor in advancing self-driving systems. Whether through lidar, advanced driver-assistance systems (ADAS), or intelligent speed assistance (ISA), no innovation in driverless car systems can go very far without location technology. Combining location data with AI can enable cars to better understand their surroundings, enabling them to make informed decisions that improve safety and efficiency on the road.
Despite the constant innovations that continue to enhance safety and efficiency, there is a discussion to be had about how autonomous vehicles will integrate into traffic and whether they should somehow be distinctive. Unlike traditional cars where the emphasis is on driving, autonomous vehicles prioritize the passenger experience. This shift in focus brings new design considerations.
For example, without the need for a driver, the interior space of the cockpit can be reimagined to enhance comfort, safety, and convenience. While some argue that self-driving cars should resemble traditional cars, others believe that their unique functionality and priorities require a more distinctive design. Only time will tell.
As advancements in self-driving systems and the integration of AI and other in-vehicle technologies continue, a future where driverless cars are a common sight on the streets is slowly shifting from a concept to a reality. While self-driving cars may not be a frequent sight on today’s roads, they are certainly on the horizon.
When it comes to the future of travel, self-driving technology is changing the conversation. However, do you truly understand the different levels of autonomous vehicles beyond the excitement?
The term automated driving has become synonymous with self-driving cars, but in reality, it covers a wide range of technologies and capabilities.
The Society of Automotive Engineers (SAE) has defined six SAE Levels of Driving Automation™, ranging from Level 0 (no automation) to Level 5 (full automation). Each level represents a different degree of control over the vehicle, from basic driver assistance features to fully autonomous operation.
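The six levels can be captured in a simple lookup table; the short descriptions below are common paraphrases for illustration, not the SAE's official wording:

```python
# SAE J3016 levels of driving automation, paraphrased for illustration.
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: one assist feature, e.g. adaptive cruise control",
    2: "Partial automation: combined steering and speed assists; driver supervises",
    3: "Conditional automation: the car drives in limited conditions; driver must take over on request",
    4: "High automation: no driver needed within a defined operating area",
    5: "Full automation: the car drives itself everywhere, in all conditions",
}

for level, description in sorted(SAE_LEVELS.items()):
    print(f"Level {level}: {description}")
```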
Despite all the buzz around autopilots and artificial intelligence (AI), most cars worldwide still require a human to handle all navigation tasks. Although recent advancements might imply that we are at Level 2, market analysis shows that less than 10% of cars currently use automation technologies higher than Level 1, which paints a very different picture from the anticipated takeover by self-driving cars.
Prioritize Safety
As advancements in AI continue and regulations catch up, we can anticipate an increase in the number of vehicles achieving higher levels of automation. The global autonomous vehicle market is currently close to US$2 billion. However, it is projected to reach just over US$13.5 billion by 2030, marking an almost sevenfold increase in six years.
Safety is a key driver behind the progress of automated driving. Approximately 1.35 million individuals lose their lives each year in road crashes, with human error playing a significant role. Many believe that the adoption of advanced driver assistance systems (ADAS) and fully autonomous technology could significantly reduce these numbers.
Original ADAS
Despite the perception that ADAS is a relatively recent technology, the first adaptive cruise control system was actually introduced by Mercedes-Benz in 1999, laying the groundwork for today’s advanced driver assistance systems.
In the early 2000s, car manufacturers started integrating additional ADAS features like lane-keeping assist, automatic emergency braking, and blind-spot detection. These developments led to more sophisticated systems such as traffic sign recognition and driver monitoring, enhancing the vehicle’s ability to support the driver.
Advancement
Although fully autonomous vehicle technology is progressing rapidly, the infrastructure required to support it is still in its early phases. For instance, road markings and signs need to be standardized and easily recognizable by AI systems. Additionally, roads must be equipped with advanced sensors and communication systems to enable safe interaction between autonomous vehicles and other road users.
The future will heavily depend on vehicle-to-everything (V2X) communication, allowing cars to communicate with each other and infrastructure to enhance safety and traffic management. This technology is anticipated to become more widespread as we move toward higher levels of automation.
Crucial Foundation
With smart vehicles becoming increasingly reliable and integrated into our daily lives, cybersecurity has emerged as a vital concern. Hackers pose a real threat to the levels of automation achieved so far. To address these concerns, experts are developing security solutions to safeguard autonomous cars from hacking attempts and unauthorized access.
The advent of self-driving vehicles represents a significant shift in transportation, and smart cars are predicted to revolutionize the way we drive permanently.
As the automotive industry progresses through the six SAE Levels of Driving Automation™, vehicles are growing more intelligent and intuitive by the day.
In this piece, we delve into the benefits and challenges of artificial intelligence (AI) and robotics in the evolution of autonomous driving.
The high ground
Luxury vehicles available on the market today have come a long way from just a few years ago. They still transport you from point A to point B, but the travel experience has transformed significantly since the introduction of AI and robotics.
Thanks to sophisticated technology enabling features such as autonomous steering, acceleration, and braking (under various conditions), the latest cars and trucks can now make informed decisions that enhance our safety, comfort, and entertainment.
Here’s how.
AI’s main advantage lies in its ability to analyze data from different sensors and cameras, enabling vehicles to better understand their surroundings. Robotics facilitate the execution of intricate tasks such as navigating through traffic, parking, and predicting potential road hazards.
Together, they can take over the most stressful, unpredictable, and tiring parts of driving. This not only improves traffic safety, efficiency, and environmental impact but also allows human drivers to enjoy stress-free rides. Despite the promising progress made, it has not been without challenges.
Learning process
Driving a car that keeps you in the correct lane and at the appropriate speed while your favorite music playlist plays in the background is wonderful, but full autonomy is still a long way off. The reason might surprise you. As proficient as robots are at advanced functions, the element that makes us human could be their greatest obstacle to full autonomy.
This is because, however capable they appear, AI and robotics lack one human trait: social interaction. Daily interactions with other drivers, cyclists, and pedestrians that come naturally to human drivers pose a unique challenge for AI.
Situations such as interpreting hand signals from a traffic officer or understanding a pedestrian’s intention to cross the road are areas where humans excel, but this aspect still requires improvement in autonomous driving.
A two-way street
Robots may have a long way to go before they can recognize that another driver has just gestured a thank-you, but while they struggle with human interaction, they compensate with other advantages. The same advanced features that are gradually enabling driverless cars are likely to also transform the maritime industry through the creation of autonomous shipping ports.
Tasks such as loading and unloading are now handled by automated cranes and self-driving trucks, while AI algorithms are used to optimize routing and scheduling. These innovations not only enhance productivity but also play a crucial role in significantly reducing carbon emissions.
Moving forward
As we continue to advance AI and robotics, these two technologies are not only turning vehicles into autonomous entities capable of making informed decisions, but also revolutionizing our entire approach to transportation. With each new level of automation, the collaboration between robotics and AI will continue to bring us closer to a future of fully autonomous cars where humans are merely content passengers.
Suddenly, a person on a bike dressed as the Easter bunny appears and rides across the road. For the driver behind, there is a moment of surprise, a quick look in the rearview mirror, followed by slamming on the brakes. The driver quickly steers away from the cyclist, reacting impulsively. Whether it’s the Easter Bunny or a person in a costume is insignificant in this situation; the driver perceives an obstacle and responds accordingly.
It’s a completely different situation when artificial intelligence is in control. It hasn’t “learned” the scenario of an “Easter bunny on a bicycle” and therefore cannot clearly identify the object in front of it. Its reaction is uncertain. In the worst-case scenario, the AI becomes “confused” and makes the wrong decision.
A well-known driving test conducted by US researchers demonstrated the consequences of AI confusion. When a sticker was placed on a stop sign, the AI interpreted the sign not as an instruction to stop, but as a speed limit. The system chose a familiar option instead of issuing a warning. Incorrect decisions like this can have fatal results.
An ambiguous reality
“A perception AI that has never encountered a skater has no chance of correctly identifying them,” explains Sven Fülster, one of the four founders of the Berlin-based start-up Deep Safety. This is a challenge that the entire industry is grappling with. Established in 2020, the company aims to address the biggest challenge of autonomous driving: preparing artificial intelligence for the unpredictability of real-life situations.
Fortunately, encounters with cycling Easter bunnies are rare. In principle, AI can contribute to increased safety on the road. More than 90 percent of all traffic accidents are attributed to human error. AI systems can process, calculate, and interpret an almost unimaginable volume of data simultaneously. They are not distracted by smartphones, radios, or passengers. They do not get tired as long as the power supply and technology are functioning properly. Moreover, the more new data they process, the more precisely they operate.
However, real life presents an infinite combination of possibilities, and not every eventuality can be trained and tested. The most dangerous scenario is the misinterpretation of unforeseen traffic situations by technical systems: in flowing traffic, at a stop sign, or when encountering the Easter bunny.
An educational process for artificial intelligence
Will it ever be possible to navigate safely through the hectic rush hour of London, Cologne, Paris, or Berlin while relaxing at the wheel and reading the newspaper? “Certainly,” say the entrepreneurs at Deep Safety, who are sending their AI to driving school. “We are developing an AI that can admit when it doesn’t know something.”
Sven Fülster, CEO of the start-up, explains: “With our technology, a driverless car can comprehend the world on a much deeper level. We have incorporated what humans learn in driving school: anticipating and understanding the movements of others while thinking ahead.”
Deep Safety’s offering is named BetterAI. “We understand that AI, unlike humans, will interpret unknown situations in unpredictable ways. BetterAI is the first AI certified to meet the ISO 26262 functional safety standard, recognizing unknown situations, unknown entities, and people engaging in unknown behaviors,” explain the entrepreneurs.
For instance, Deep Safety’s Perception AI can effectively manage unknown scenarios and ambiguous cases on the road. It can also identify the Easter Bunny on a bicycle – perhaps not as a person in disguise, but still as an unidentifiable object from which distance should be maintained. Current vehicle models’ AIs cannot accomplish this.
Real-time data analysis
Sebastian Hempel, Chief Technology Officer at Deep Safety, explains why this seemed unattainable for a long time: “The challenge is to execute real-time analysis of perceptual data – what the camera ‘sees.’ It takes a considerable amount of time to process an image. Moreover, 30 images must be processed per second.” Deep Safety’s AI has reached a stage where this is possible.
The creators of Deep Safety firmly believe that their technology can prevent similar misunderstandings by AI systems in the future. Their vision is ambitious: “Our immediate aim is to enhance the driver assistance systems currently in use on the roads,” says Fülster. “In the near future, our BetterAI will render the driver unnecessary. Ultimately, we aim to introduce autonomous driving to urban areas.”
In recent years, Artificial Intelligence (AI) has made a significant impact on the automotive sector, driving the development of Level 4 and Level 5 autonomous vehicles. Despite being in existence since the 1950s, the surge in AI’s popularity can be attributed to the vast amount of available data today. The proliferation of connected devices and services enables the collection of data across every industry, fueling the AI revolution.
While advancements are being pursued to enhance sensors and cameras for data generation in autonomous vehicles, Nvidia revealed its initial AI computer in October 2017 to facilitate deep learning, computer vision, and parallel computing algorithms. AI has become an indispensable element of automated drive technology, and understanding its functioning in autonomous and connected vehicles is crucial.
What is Artificial Intelligence?
The term “Artificial Intelligence” was coined by computer scientist John McCarthy in 1955. AI refers to the capability of a computer program or machine to think, learn, and make decisions. In a broader sense, it signifies a machine that emulates human cognition. Through AI, we enable computer programs and machines to perform tasks akin to human actions by feeding them copious amounts of data, which is analyzed and processed to facilitate logical thinking. Automating repetitive human tasks signifies just the beginning of AI’s potential, with medical diagnostic equipment and autonomous vehicles employing AI to save human lives.
The Growth of AI in Automotive
The automotive AI market was valued at $783 million in 2017 and is projected to reach nearly $11 billion by 2025, with a CAGR of about 38.5%. IHS Markit predicted a 109% increase in the installation rate of AI-based systems in new vehicles by 2025, compared to the 8% adoption rate in 2015. AI-based systems are expected to become standard in new vehicles, particularly in two categories: infotainment human-machine interface and advanced driver assistance systems (ADAS) and autonomous vehicles.
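These growth figures can be sanity-checked with the compound annual growth rate formula, CAGR = (end/start)^(1/years) − 1; a quick sketch using the numbers above:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate as a fraction (0.385 == 38.5%)."""
    return (end_value / start_value) ** (1 / years) - 1

# $783 million in 2017 growing to roughly $11,000 million ($11 billion)
# by 2025 spans 8 years.
rate = cagr(783, 11_000, 2025 - 2017)
print(f"{rate:.1%}")  # roughly 39%, consistent with the ~38.5% CAGR cited
```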
The largest and fastest-growing technology in the automotive AI market is expected to be deep learning, a technique for implementing machine learning to achieve AI. Currently, it is employed in various applications such as voice recognition, recommendation engines, sentiment analysis, image recognition, and motion detection in autonomous vehicles.
How Does AI Work in Autonomous Vehicles?
AI is now a ubiquitous term, but how does it function in autonomous vehicles?
Let’s first consider the human aspect of driving, where sensory functions like vision and sound are used to observe the road and other vehicles. Our driving decisions, such as stopping at a red light or yielding to pedestrians, are influenced by memory. Years of driving experience train us to notice common elements on the road, like a quicker route to the office or a noticeable bump.
Although autonomous vehicles are designed to drive themselves, the objective is for them to mirror human driving behaviors. Achieving this involves providing these vehicles with sensory functions, cognitive capabilities (such as memory, logical thinking, decision-making, and learning), and executive functions that replicate human driving practices. The automotive industry has been continuously evolving to accomplish this in recent years.
According to Gartner, by 2020, approximately 250 million cars will be interconnected with each other and the surrounding infrastructure through various V2X (vehicle-to-everything communication) systems. As the volume of data input into in-vehicle infotainment (IVI) units and telematics systems increases, vehicles can capture and share not only their internal system status and location data, but also real-time changes in their surroundings. Autonomous vehicles are equipped with cameras, sensors, and communication systems to enable the generation of extensive data, allowing the vehicle, with the aid of AI, to perceive, understand, and make decisions akin to human drivers.
AI Perception Action Cycle in Autonomous Vehicles
When autonomous vehicles gather data from their surroundings and send it to the intelligent agent, a repeating loop called the Perception Action Cycle is created. The intelligent agent then makes decisions based on this data, allowing the vehicle to take specific actions in its environment.
Now let’s break down the process into three main parts:
Part 1: Collection of In-Vehicle Data & Communication Systems
Numerous sensors, radars, and cameras are installed in autonomous vehicles to generate a large amount of environmental data. Together, these form the Digital Sensorium, enabling the vehicle to perceive the road, infrastructure, other vehicles, and surrounding objects. This data is then processed using super-computers, and secure data communication systems are used to transmit valuable information to the Autonomous Driving Platform.
Part 2: Autonomous Driving Platform (Cloud)
The cloud-based Autonomous Driving Platform contains an intelligent agent that utilizes AI algorithms to make decisions and act as the vehicle’s control policy. It is also connected to a database where past driving experiences are stored. This, combined with real-time input from the vehicle and its surroundings, enables the intelligent agent to make accurate driving decisions.
Part 3: AI-Based Functions in Autonomous Vehicles
Based on the decisions of the intelligent agent, the vehicle can detect objects on the road, navigate through traffic without human intervention, and reach its destination safely. Additionally, AI-based functional systems such as voice and speech recognition, gesture controls, eye tracking, and other driving monitoring systems are being integrated into autonomous vehicles.
These systems are designed to enhance user experience and ensure safety on the roads. The driving experiences from each ride are recorded and stored in the database to improve the intelligent agent’s decision-making in the future.
The Perception Action Cycle is a repetitive process. The more cycles that occur, the more intelligent the agent becomes, leading to greater accuracy in decision-making, especially in complex driving situations. With more connected vehicles, the intelligent agent can make decisions based on data generated by multiple autonomous vehicles.
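The three parts of the cycle described above can be sketched as a simple loop. The sensor reading, the rule-based control policy, and the experience store below are hypothetical stand-ins for the real components, which involve far richer data and learned models:

```python
# A minimal sketch of the Perception Action Cycle: sense the environment,
# decide via a control policy, act, and record the outcome so that future
# decisions can improve as more cycles accumulate.
experience_db = []  # past driving experiences (hypothetical store)

def sense(environment: dict) -> dict:
    """Part 1: collect sensor, radar, and camera data (stubbed here)."""
    return {"obstacle_distance_m": environment["obstacle_distance_m"]}

def decide(perception: dict) -> str:
    """Part 2: the intelligent agent's control policy (a simplified rule)."""
    return "brake" if perception["obstacle_distance_m"] < 10 else "cruise"

def act(action: str) -> str:
    """Part 3: execute the chosen action through the vehicle's actuators."""
    return action

def perception_action_cycle(environment: dict) -> str:
    perception = sense(environment)
    action = decide(perception)
    experience_db.append((perception, action))  # stored for future learning
    return act(action)

print(perception_action_cycle({"obstacle_distance_m": 5}))   # brake
print(perception_action_cycle({"obstacle_distance_m": 50}))  # cruise
```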
Artificial intelligence, particularly neural networks and deep learning, is essential for the proper and safe functioning of autonomous vehicles. AI is driving the development of Level 5 autonomous vehicles, which won’t require a steering wheel, accelerator, or brakes.
An autonomous car can sense its environment and operate without human involvement. It doesn’t require a human passenger to take control of the vehicle at any time or even be present in the vehicle at all. An autonomous car can navigate anywhere a traditional car can and perform all the tasks of an experienced human driver.
The Society of Automotive Engineers (SAE) currently defines 6 levels of driving automation, ranging from Level 0 (fully manual) to Level 5 (fully autonomous). These levels have been adopted by the US Department of Transportation.
Autonomous vs. Automated vs. Self-Driving: What’s the Difference?
Instead of using the term “autonomous,” the SAE prefers “automated.” This choice is made because “autonomy” has broader implications beyond the electromechanical. A fully autonomous car would be self-aware and capable of making its own choices; for example, if you say “drive me to work,” the car might decide to take you to the beach instead. In contrast, a fully automated car would follow instructions and then drive itself.
The term “self-driving” is often used interchangeably with “autonomous,” but there’s a slight difference. A self-driving car can operate autonomously in some or all situations, but a human passenger must always be present and ready to take control. Self-driving cars fall under Level 3 (conditional driving automation) or Level 4 (high driving automation).
They are subject to geofencing, unlike a fully autonomous Level 5 car that could travel anywhere.
How Do Autonomous Cars Work?
Autonomous vehicles depend on sensors, actuators, sophisticated algorithms, machine learning systems, and robust processors to run software.
Autonomous cars generate and update a map of their surroundings using various sensors located in different parts of the vehicle. Radar sensors monitor nearby vehicle positions. Video cameras recognize traffic lights, read road signs, track other vehicles, and locate pedestrians. Lidar sensors bounce light pulses off the car’s surroundings to measure distances, detect road edges, and identify lane markings. Ultrasonic sensors in the wheels identify curbs and other vehicles during parking.
Advanced software processes all this sensory input, plots a path, and sends instructions to the car’s actuators, which manage acceleration, braking, and steering. The software utilizes hard-coded rules, obstacle avoidance algorithms, predictive modeling, and object recognition to comply with traffic rules and navigate obstacles.
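Because radar, lidar, and ultrasonic sensors report overlapping views of the same obstacle, the software must reconcile them. The sketch below shows one deliberately simple fusion rule, acting on the most conservative (closest) valid reading; production stacks use probabilistic filters such as Kalman filters rather than anything this crude:

```python
# A hedged sketch of one simple sensor-fusion rule: given several sensors'
# distance estimates for the same obstacle, act on the closest valid reading.
# A reading of None models a sensor that returned no echo.
def fuse_obstacle_distance(readings):
    """Return the closest valid distance (metres) reported by any sensor,
    or None if no sensor produced a usable reading."""
    valid = [d for d in readings.values() if d is not None and d > 0]
    return min(valid) if valid else None

readings = {"radar": 12.4, "lidar": 11.8, "ultrasonic": None}
print(fuse_obstacle_distance(readings))  # 11.8
```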
What Are The Challenges With Autonomous Cars?
Fully autonomous (Level 5) cars are being tested in various areas of the world but are not yet available to the general public. We are still years away from that. The challenges encompass technological, legislative, environmental, and philosophical aspects. These are just a few of the uncertainties.
Lidar and Radar
Lidar is expensive and is still finding the appropriate balance between range and resolution. Would the lidar signals of multiple autonomous cars interfere with each other if they were to drive on the same road? If multiple radio frequencies are available, will the frequency range be able to support mass production of autonomous cars?
Weather Conditions
How will autonomous cars perform in heavy precipitation? Lane dividers disappear when there is snow on the road. How will the cameras and sensors track lane markings if they are obscured by water, oil, ice, or debris?
Traffic Conditions and Laws
Will autonomous cars encounter issues in tunnels or on bridges? How will they fare in bumper-to-bumper traffic? Will autonomous cars be restricted to a specific lane? Will they have access to carpool lanes? What about the fleet of traditional cars sharing the road for the next 20 or 30 years?
State vs. Federal Regulation
The regulatory process in the US has shifted from federal guidance to state-by-state mandates for autonomous cars. Some states have proposed a per-mile tax on autonomous vehicles to prevent the rise of “zombie cars” driving around without passengers. Lawmakers have also drafted bills stipulating that all autonomous cars must be zero-emission vehicles and have a panic button installed. Will the laws differ from state to state? Will you be able to cross state lines with an autonomous car?
Accident Liability
Who is responsible for accidents caused by an autonomous car? The manufacturer? The human passenger? The latest blueprints indicate that a fully autonomous Level 5 car will not have a dashboard or a steering wheel, so a human passenger would not have the option to take control of the vehicle in an emergency.
Artificial vs. Emotional Intelligence
Human drivers rely on subtle cues and non-verbal communication to make split-second judgment calls and predict behaviors. Will autonomous cars be able to replicate this connection? Will they have the same life-saving instincts as human drivers?
What Are The Benefits Of Autonomous Cars?
The potential scenarios for convenience and quality-of-life improvements are endless. The elderly and the physically disabled would gain independence. If your children were at summer camp and forgot their bathing suits and toothbrushes, the car could bring them the forgotten items. You could even send your dog to a veterinary appointment.
But the primary promise of autonomous cars lies in the potential to significantly reduce CO2 emissions. In a recent study, experts identified three trends that, if adopted concurrently, would unleash the full potential of autonomous cars: vehicle automation, vehicle electrification, and ridesharing.
By 2050, these “three revolutions in urban transportation” could:
- Reduce traffic congestion (30% fewer vehicles on the road)
- Cut transportation costs by 40% (in terms of vehicles, fuel, and infrastructure)
- Improve walkability and livability
- Free up parking lots for other uses (schools, parks, community centers)
- Reduce urban CO2 emissions by 80% worldwide
What is a self-driving car?
A self-driving car, sometimes referred to as an autonomous car or driverless car, is a vehicle that utilizes a combination of sensors, cameras, radar, and artificial intelligence (AI) to travel between destinations without a human operator. To qualify as fully autonomous, a vehicle must be capable of navigating to a predetermined destination without human intervention.
The potential impact of self-driving cars on future roadways and transportation industries is significant. For instance, they could potentially decrease traffic congestion, reduce the number of accidents, and facilitate the emergence of new self-driving ride-hailing and trucking services.
Audi, BMW, Ford, Google, General Motors, Tesla, Volkswagen, and Volvo are among the companies that are developing and testing autonomous vehicles. Waymo, a self-driving car test project by Google’s parent company Alphabet Inc., utilizes a fleet of self-driving cars, including a Toyota Prius and an Audi TT, to navigate hundreds of thousands of miles on streets and highways.
Self-driving car systems are powered by AI technologies. Developers of self-driving cars leverage extensive data from image recognition systems, as well as machine learning and neural networks, to construct autonomous driving systems.
Neural networks identify patterns within the data and feed them to machine learning algorithms, which are sourced from a variety of sensors, such as radar, lidar, and cameras. These sensors gather data utilized by the neural network to learn and recognize elements within the driving environment, including traffic lights, trees, curbs, pedestrians, and street signs.
An autonomous car employs an array of sensors to detect nearby vehicles, pedestrians, curbs, and signs.
The self-driving car constructs a map of its environment to comprehend its surroundings and plans its route. It must ascertain the safest and most efficient routes to its destination while adhering to traffic regulations and implementing obstacle avoidance. Geofencing, a concept that assists vehicles with self-driving capabilities in navigating predefined boundaries, is also employed.
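Finding "the safest and most efficient routes" is classically a shortest-path problem over a road graph; a minimal sketch using Dijkstra's algorithm, on a toy road network whose intersections and distances are invented for illustration:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: return (total cost, path) through a road graph
    whose edges carry non-negative costs (distances, travel times, etc.)."""
    queue = [(0.0, start, [start])]  # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []  # goal unreachable

# Toy road network: edge weights are distances in km (illustrative only).
roads = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.0, "D": 4.0},
    "C": {"D": 1.5},
    "D": {},
}
print(shortest_route(roads, "A", "D"))  # (4.5, ['A', 'B', 'C', 'D'])
```

A real planner would also weight edges by live traffic and safety factors rather than raw distance alone.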
In automotive applications, geofencing is often used for fleet management, vehicle tracking, and enhancing driver safety. This involves creating virtual boundaries, or geofences, around specific geographic areas using Global Positioning System (GPS) or other location-based technology. These boundaries can trigger automated actions or alerts when a vehicle enters or exits the defined area.
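In its simplest circular form, the geofence check described above reduces to a great-circle distance comparison against the fence radius; a sketch using the haversine formula, with an invented fence centre and radius:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS coordinates, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(vehicle, center, radius_km):
    """True if the vehicle's GPS position lies within the circular geofence."""
    return haversine_km(*vehicle, *center) <= radius_km

depot = (52.5200, 13.4050)  # hypothetical fence centre (Berlin)
print(inside_geofence((52.5205, 13.4060), depot, radius_km=1.0))  # True
print(inside_geofence((48.8566, 2.3522), depot, radius_km=1.0))   # False (Paris)
```

A fleet system would fire the entry/exit alert whenever this boolean changes between consecutive GPS fixes.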
Waymo utilizes a combination of sensors, lidar, and cameras to identify and predict the behavior of objects around the vehicle. This occurs within a fraction of a second. The system’s maturity is crucial; the more the system operates, the more data is integrated into its deep learning algorithms, enabling it to make more refined driving decisions.
The operation of Waymo vehicles is detailed below:
– The driver or passenger sets a destination, and the car’s software computes a route.
– A rotating, roof-mounted lidar sensor monitors a 60-meter range around the car and generates a dynamic three-dimensional map of the car’s immediate environment.
– A sensor on the left rear wheel tracks lateral movement to determine the car’s position relative to the 3D map.
– Radar systems in the front and rear bumpers calculate distances to obstacles.
– AI software in the car is linked to all the sensors and collects input from Google Street View and in-car video cameras.
– AI mimics human perceptual and decision-making processes through deep learning and controls actions in driver control systems, such as steering and brakes.
– The car’s software references Google Maps for advanced information on landmarks, traffic signs, and lights.
– An override function is available to allow a human to take over vehicle control if needed.
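The rotating-lidar step above amounts to converting each (bearing, range) return into a point in the car's own coordinate frame and discarding returns beyond the sensor's usable range. A deliberately simplified two-dimensional sketch (a real sensor produces full 3D point clouds):

```python
import math

LIDAR_RANGE_M = 60.0  # maximum usable range of the roof-mounted sensor

def scan_to_points(returns):
    """Convert (bearing_degrees, range_m) lidar returns into 2D points in the
    car's frame, dropping returns beyond the sensor's 60 m range."""
    points = []
    for bearing_deg, range_m in returns:
        if range_m <= LIDAR_RANGE_M:
            theta = math.radians(bearing_deg)
            points.append((range_m * math.cos(theta), range_m * math.sin(theta)))
    return points

# Three returns: one straight ahead at 10 m, one at 90 degrees and 30 m,
# and one beyond the 60 m range that gets discarded.
scan = [(0.0, 10.0), (90.0, 30.0), (180.0, 75.0)]
print(scan_to_points(scan))
```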
The Waymo project is an example of a nearly fully autonomous self-driving car. A human driver is still necessary but only to intervene when required. Although not entirely self-driving in the purest sense, it can operate independently under ideal conditions and is highly autonomous.
Numerous vehicles currently available to consumers do not possess full autonomy due to various technological, regulatory, and safety considerations. Tesla, despite offering self-driving features in many of its cars and being credited with driving progress toward self-driving cars, faces obstacles such as technological complexity, sensor constraints, and safety concerns.
Many production cars today feature a lower level of autonomy but still have some self-driving capabilities.
Notable self-driving features include:
– Hands-free steering re-centers the car without the driver’s hands on the wheel, though the driver still needs to remain attentive.
– Adaptive cruise control (ACC) automatically maintains a chosen distance between the driver’s car and the vehicle ahead.
– Lane-centering steering intervenes when the driver drifts across lane markings, automatically guiding the vehicle back toward the center of the lane.
– Self-parking utilizes the car’s sensors to maneuver into a parking space with minimal or no driver input, handling steering, acceleration, and guidance automatically.
– Highway driving assist combines various features to assist drivers during highway travel.
– Lane-change assistance monitors traffic in the lanes around the vehicle to help the driver change lanes safely. This feature can either provide alerts or steer the vehicle automatically in safe conditions.
– Lane departure warning (LDW) notifies the driver if the vehicle begins to change lanes without signaling.
– Summon is a feature found in Tesla vehicles that can independently navigate out of a parking space and travel to the driver’s location.
– Evasive-steering assist steers the vehicle automatically to help the driver in avoiding an impending collision.
– Automatic emergency braking (AEB) recognizes imminent collisions and applies the brakes with the aim of preventing an accident.
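To make the idea behind ACC concrete, a simplified gap controller can be sketched as follows. This is a minimal illustration under assumed parameters (the time gap, gain, and speed cap are invented here), not how any particular manufacturer implements the feature.

```python
def acc_speed_command(own_speed, lead_distance_m, lead_speed,
                      time_gap_s=1.8, max_speed=30.0, gain=0.4):
    """Adaptive cruise control sketch: hold a set time gap to the car ahead.

    Speeds are in m/s, distance in meters. Returns the commanded speed,
    capped at the driver-set max_speed.
    """
    if lead_distance_m is None:           # no vehicle ahead: plain cruise control
        return max_speed
    desired_gap = time_gap_s * own_speed  # the desired gap grows with speed
    gap_error = lead_distance_m - desired_gap
    # Match the lead vehicle's speed, corrected toward the desired gap.
    command = lead_speed + gain * gap_error
    return max(0.0, min(command, max_speed))
```

When the actual gap equals the desired time gap, the controller simply matches the lead car’s speed; when the gap shrinks, it commands a lower speed until the gap is restored.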
Various car manufacturers offer a combination of these autonomous and driver assistance technologies, including the following:
- Audi’s Traffic Jam Assist feature assists drivers in heavy traffic by assuming control of steering, acceleration, and braking.
- General Motors’ Cadillac brand provides Super Cruise for hands-free driving on highways.
- Genesis learns the driver’s preferences and implements autonomous driving that mirrors these behaviors.
- Tesla’s Autopilot feature offers drivers LDW, lane-keep assist, ACC, park assist, Summon, and advanced self-driving capabilities.
- Volkswagen IQ Drive with Travel Assist includes lane-centering and ACC.
- Volvo’s Pilot Assist system offers semi-autonomous driving, lane-centering assist, and ACC.
Levels of autonomy in autonomous vehicles
The Society of Automotive Engineers (SAE) establishes the following six levels of driving automation:
Level 0: No driving automation. The driver executes all driving operations.
Level 1: Driver assistance. The vehicle can aid with steering, accelerating, or braking, but with only one function at a time. The driver must remain engaged.
Level 2: Partial driving automation. Two or more automated driving functions can operate simultaneously. The vehicle can control steering, accelerating, and braking, but the driver must remain vigilant and be prepared to regain control at any time.
Level 3: Conditional driving automation. The vehicle can drive independently in specific scenarios. It can perform all driving tasks in scenarios such as driving on specific highways. The driver is still responsible for taking control when necessary.
Level 4: High driving automation. The vehicle can self-drive in certain scenarios without driver input; in those scenarios, driver input is optional.
Level 5: Full driving automation. The vehicle can self-drive under all conditions without any driver input.
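For quick reference, the six levels can be condensed into a small lookup table. The one-line summaries below are paraphrases for illustration, not SAE J3016’s official wording.

```python
# SAE J3016 driving automation levels, condensed to a lookup table.
# The short descriptions are informal paraphrases, not the standard's text.
SAE_LEVELS = {
    0: ("No driving automation", "driver does everything"),
    1: ("Driver assistance", "one assist function at a time"),
    2: ("Partial driving automation", "combined assists; driver supervises"),
    3: ("Conditional driving automation", "self-drives in set scenarios; driver on standby"),
    4: ("High driving automation", "self-drives in set scenarios; no driver needed there"),
    5: ("Full driving automation", "self-drives everywhere, under all conditions"),
}

def requires_human_fallback(level: int) -> bool:
    """True when a human must be ready to take over (Levels 0 through 3)."""
    return level <= 3
```

The `requires_human_fallback` split captures the key boundary in the taxonomy: from Level 4 up, the vehicle, not the driver, is the fallback within its operating domain.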
The US National Highway Traffic Safety Administration (NHTSA) defines a similar set of driving automation levels.
Uses for self-driving vehicles
As of 2024, carmakers have achieved Level 4. Manufacturers must still clear various technological milestones, and several crucial issues must be addressed, before fully autonomous vehicles can be commercially acquired and used on public roads in the US. Although vehicles with Level 4 autonomy are not available for public purchase, they are being employed in other capacities.
For instance, Waymo collaborated with Lyft to offer a fully autonomous commercial ride-sharing service named Waymo One. Customers can hail a self-driving car to transport them to their destination and provide feedback to Waymo. The cars still include a safety driver in case the ADS needs to be overridden. The service is offered in the Phoenix metropolitan area; San Francisco; Los Angeles; and Austin, Texas.
Autonomous street-cleaning vehicles are also being manufactured in China’s Hunan province, meeting the Level 4 prerequisites for independently navigating a familiar environment with limited novel situations.
Projections from manufacturers vary on when widespread availability of Level 4 and 5 vehicles will be achieved. A successful Level 5 vehicle must be able to react to novel driving situations as well as or better than a human can. Meanwhile, approximately 30 US states have enacted legislation on self-driving vehicles. Laws differ by state, but they typically cover aspects such as testing, deployment, liability, and regulation of autonomous vehicles.
The advantages and disadvantages of autonomous cars
Autonomous vehicles are a culmination of various technical complexities and accomplishments that continue to improve over time. They also come with many anticipated and unanticipated benefits and challenges.
Benefits of self-driving cars
The primary benefit championed by proponents of autonomous vehicles is safety. A US Department of Transportation and NHTSA statistical projection of traffic fatalities for 2022 estimated that 40,990 people died in motor vehicle traffic accidents that year — of those fatalities, 13,524 were alcohol-related. Self-driving cars can eliminate risk factors, such as drunk or distracted driving, from the equation. However, self-driving cars are still susceptible to other factors, such as mechanical issues, that can cause accidents.
In theory, if most vehicles on the roads were autonomous, traffic would flow smoothly and there would be reduced traffic congestion. In fully automated cars, the occupants could engage in various activities without having to pay attention to driving.
Self-driving trucks have undergone testing in the United States and Europe, enabling drivers to use autopilot for long distances. This allows drivers to rest or attend to other tasks, improving driver safety and fuel efficiency through truck platooning, which utilizes ACC, collision avoidance systems, and vehicle-to-vehicle communication for cooperative ACC.
Despite the potential benefits, there are some downsides to self-driving cars. Riding in a vehicle without a human driver at the wheel might initially be unsettling. As self-driving features become more common, human drivers might overly depend on autopilot technology instead of being prepared to take control in the event of software failures or mechanical issues.
According to a Forbes survey, self-driving vehicles are currently involved in twice as many accidents per mile compared to non-self-driving vehicles.
For instance, in 2022, Tesla faced criticism after a video showed a Tesla car crashing into a child-sized dummy during an auto-brake test. There have been numerous reports of Tesla cars being involved in crashes while in full self-driving mode. In one such incident in 2023, a Tesla Model Y in full self-driving mode hit a student who was stepping off a bus. Although the student initially sustained life-threatening injuries, they were upgraded to good condition a few days after the incident.
Other challenges of self-driving cars include the high production and testing costs as well as the ethical considerations involved in programming the vehicles to react in different situations.
Weather conditions also pose a challenge. Environmental sensors in some vehicles might be obstructed by dirt or have their view hindered by heavy rain, snow, or fog.
Self-driving cars face the task of recognizing numerous objects in their path, ranging from debris and branches to animals and people. Additional road challenges include GPS interference in tunnels, lane changes due to construction projects, and complex decisions such as where to stop to give way to emergency vehicles.
The systems must make rapid decisions on whether to slow down, swerve, or continue normal acceleration. This ongoing challenge has led to reports of self-driving cars hesitating and swerving unnecessarily when objects are detected on or near the road.
This issue was evident in a fatal accident in March 2018 involving an autonomous car operated by Uber. The company reported that the vehicle’s software identified a pedestrian but dismissed it as a false positive, failing to swerve to avoid hitting her. Following the crash, Toyota temporarily halted the testing of self-driving cars on public roads and continued evaluations in its test facility. The Toyota Research Institute created a new 60-acre test facility in Michigan to further advance automated vehicle technology.
Crashes also raise the issue of liability, as legislators have yet to define who is responsible when an autonomous car is involved in an accident. There are also significant concerns about the potential for the software used to operate autonomous vehicles to be hacked, and automotive companies are addressing cybersecurity risks.
In the United States, car manufacturers must comply with the Federal Motor Vehicle Safety Standards issued and regulated by NHTSA.
In China, car manufacturers and regulators are pursuing a different approach to meet standards and make self-driving cars a common feature. The Chinese government is reshaping urban environments, policies, and infrastructure to create a more accommodating setting for self-driving cars.
This includes formulating guidelines for human mobility and enlisting mobile network operators to share the processing load needed to provide self-driving vehicles with the necessary navigation data. The autocratic nature of the Chinese government allows for this approach, bypassing the legalistic processes that testing is subjected to in the United States.
The advancement toward self-driving cars began with gradual automation features focusing on safety and convenience before the year 2000, including cruise control and antilock brakes. Following the turn of the millennium, advanced safety features such as electronic stability control, blind-spot detection, and collision and lane departure warnings were introduced in vehicles. Between 2010 and 2016, vehicles began incorporating advanced driver assistance capabilities such as rearview video cameras, automatic emergency brakes, and lane-centering assistance, according to NHTSA.
Since 2016, self-driving cars have progressed toward partial autonomy, featuring technologies that help drivers stay in their lane, as well as ACC and self-parking capabilities.
In September 2019, Tesla introduced the Smart Summon feature that allowed Tesla vehicles to maneuver through parking lots and reach the owner’s location without anyone in the car. In November 2022, Tesla revealed that its Full Self-Driving feature was in beta. Although it’s now out of beta testing and still called Full Self-Driving, it is not a true self-driving feature, functioning only as a Level 2 autonomous system. It offers advanced driver assistance features but still requires the driver to remain alert at all times.
Currently, new cars are being launched with capabilities such as ACC, AEB, LDW, self-parking, hands-free steering, lane-centering, lane change assist, and highway driving assist. Fully automated vehicles are not yet publicly accessible and may not be for several years. In the United States, the NHTSA gives federal guidance for introducing a new ADS onto public roads. As autonomous car technologies progress, so will the department’s guidance.
In June 2011, Nevada became the first jurisdiction globally to permit driverless cars to undergo testing on public roads. Since then, California, Florida, Ohio, and Washington, DC, have also permitted such testing. About 30 US states have now enacted laws regarding self-driving vehicles.
The history of driverless cars dates back much further. Leonardo da Vinci created the first design around 1478. Da Vinci’s “car” was crafted as a self-propelled robot powered by springs, featuring programmable steering and the capability to follow predetermined routes.
Self-driving cars are intricate and incorporate numerous interconnected systems.
Some questions for the future of EVs
The primary question regarding the future of car transportation is whether we will keep buying and owning vehicles, or if we will simply rent them as needed.
This question brings into conflict the views of traditional car manufacturers like GM with those of companies such as Waymo, Didi, and AutoX. As the autonomous driving industry evolves, we observe Tesla advancing its driving technologies, with Elon Musk asserting that these innovations will soon enable him to manage a fleet of robotic taxis, thereby justifying his company’s market valuation. Conversely, companies like Waymo, Didi, and AutoX are already running fully autonomous cab fleets in various cities across the US, China, and Russia. Some established companies like Volvo are also aiming to either operate such fleets or provide autonomous vehicles for competitors.
On the other hand, GM plans to sell autonomous vehicles directly to consumers by around 2030. This approach overlooks the reality that cars are heavily underutilized: they typically spend only about 3% of their time being driven and sit parked for the remaining 97%. However, as we know, people have a tendency to desire ownership, even for items they use infrequently, such as private swimming pools.
If individuals are going to purchase vehicles equipped with autonomous driving features, the costs associated with sensors and the technology employed must continue to decrease. LiDAR sensors, for example, have seen dramatic reductions in both price and size, dropping from a costly “spinning KFC bucket” at around $75,000 in 2015 to today’s versions that can be found for about $100, comparable to the size of a soda can or even smaller.
Meanwhile, Volkswagen is exploring subscription models, with an estimated cost of approximately $8.50 per hour. Beginning in the second quarter of 2022, Volkswagen expects to offer owners of its ID.3 and ID.4 electric vehicles some subscription options, like enhanced range, additional features, or entertainment systems for use during charging, which would be charged by the hour. This concept is intricate, as it involves the manufacturer from whom you bought your vehicle enabling or disabling specific features. Users could feel that they are being denied access rather than granted it when they pay, a notion not typically associated with car ownership.
Regardless, Volkswagen envisions a future where vehicle ownership remains prevalent. The challenge with this scenario, however, is not merely about technology availability, pricing, or business models, but rather about urban planning: simply replacing existing vehicles with autonomous ones wouldn’t resolve the issues of traffic congestion.
Many individuals would still rely on their vehicles for errands, school runs, or evading parking fees by leaving cars on the street to circulate, contributing to increased road usage rather than alleviating it. Instead, the focus should shift towards reducing the number of cars on the road and transforming urban spaces into pedestrian-friendly areas, enhancing public transport and micro-mobility options, which would lead to an improved quality of life. This approach envisions autonomous transportation as a service rather than the prevailing model of ownership.
To most traditional car manufacturers, any strategy that diverges from individual ownership poses a threat to sales. A service model relying on fleets constitutes an entirely different business paradigm, one that they lack experience in. The consequences extend beyond the future of car manufacturers, influencing the kind of urban environment we aspire to have. It is not solely the traditional car manufacturers that need to be pushed into the future; we, as consumers, have developed a fondness for our cars, often viewing them as status symbols.
Transitioning from individual ownership to a transport-as-a-service framework would necessitate widespread availability, competitive pricing, and flexibility that transcends conventional models (for instance, many of us acquire an oversized vehicle meant for daily commutes despite only needing it for a few trips per year). Therefore, it is essential to consider both efficiency and sustainability.
Will we manage to shift towards a transport-as-a-service framework, or will we still find ourselves purchasing vehicles decades from now, utilizing them barely 3% of the time? Will traditional automobile companies emerge victorious, or will practicality ultimately prevail?
“The technology is effectively available… We possess machines capable of making numerous rapid decisions that could significantly decrease traffic fatalities, substantially enhance the efficiency of our transportation infrastructure, and aid in addressing issues like carbon emissions that contribute to global warming.”
Surprisingly, this observation didn’t come from a visionary like Elon Musk, Mark Zuckerberg, or Jeff Bezos; rather, it was President Obama talking about self-driving vehicles in a WIRED interview last fall.
Over the past year, there have been several groundbreaking developments related to autonomous cars, including Ford elevating its autonomous vehicle leader to the CEO position, Tesla facing an NHTSA investigation that revealed a 40 percent reduction in accidents with Autopilot activated, and Audi launching mass sales of a “Level 3” self-driving vehicle.
However, many issues surrounding autonomous vehicles still lack answers. How will self-driving cars navigate ethical dilemmas, such as the “trolley problem”? How will urban areas, roadways, and parking situations transform? What will become of the millions of people working as ridesharing drivers or long-haul truck drivers? What is the optimal configuration of sensors for autonomous vehicles?
We believe that numerous unresolved questions about self-driving cars will not only be solved through technological advancements but also through the emerging business frameworks surrounding these vehicles. For instance, if regulators opt to impose a tax on self-driving vehicles based on the miles driven within a city, this could create varying incentives for vehicles to remain in close proximity to optimize trips and minimize expenses. If automotive companies choose to sell directly to fleet operators instead of individual consumers, it will alter how they allocate marketing and research and development resources.
The fundamental business models and profit incentives will serve as key indicators of how companies will navigate various technological, business, and societal challenges.
What might the operating system for autonomous vehicles look like?
Consider Apple, Google, and Microsoft: iOS, Android, and Windows. The three largest companies in the world today all possess (or effectively control) their own operating systems. Why? Because holding control over an operating system is an extremely vital position within a value chain. Operating system providers create an abstraction layer over hardware (thus commoditizing hardware suppliers) and establish a direct connection to end-users (thereby allowing them to charge anyone else wanting to reach those end-users).
In the realm of servers, desktops, laptops, smartphones, and tablets, each of these three companies has a unique strategy for capturing value from their operating system. Apple leverages its operating system to achieve higher profit margins on its hardware, Google utilizes its operating system to generate more revenue from its advertising business, and Microsoft directly charges for its operating system and the essential applications that operate on top of it.
At present, automakers and tech firms are competing to develop the software that will power self-driving cars, but it remains uncertain how these companies will generate revenue from their software. Tesla is adopting an Apple-like strategy, aiming to construct a comprehensive hardware-software integration; firms like Baidu and Udacity are creating “open-source” self-driving car technology to facilitate the sale of complementary products; while companies such as Mobileye and Uber seem to be forming partnerships where they will serve as software providers to car manufacturers.
It’s probable that multiple models will arise to monetize the vehicle operating system layer, and these models will profoundly influence how different companies allocate resources for R&D, marketing, lobbying, and operational activities. If the Tesla model of vertical integration prevails, expect continued eye-catching marketing and stylish vehicles, as high-priced, high-margin vehicle sales will be the primary business driver. Alternatively, if the Baidu “open source” model gains traction, anticipate a surge in low-cost automobiles from various manufacturers, with Baidu monetizing its open-source software by offering additional services.
Some of these implications are straightforward, yet there are also less apparent effects. For instance, companies that maintain a “closed” hardware/software ecosystem may be disinclined to share their data with others, potentially leading to challenges in establishing a national legislative framework for autonomous vehicles due to public apprehensions about safety and equity. Furthermore, if a single company establishes a significant lead yet is reluctant to share its data or algorithms, it might influence regulations in a manner that complicates the ability of others to develop competing systems.
How will consumers finance their transportation?
Will it be through services or personal vehicles? Currently, firms like BMW are making various predictions regarding the future of transportation use. BMW continues to sell cars directly to consumers, but they are also offering “transportation as a service,” enabling users to rent free-floating cars, request rides with drivers, or eventually summon autonomous vehicles. They believe that people will prefer to access transportation differently depending on the time and location, and they aim to present all these options via a single application.
Conversely, companies such as Mazda are convinced that consumers will always desire to drive, and they are focused on creating and selling vehicles to a “core customer who enjoys driving.”
These two perspectives are not necessarily conflicting, as different market segments will have diverse demands. However, the relative sizes of the transportation-as-a-service and the “owning a car” markets are expected to evolve, likely leading more individuals to favor on-demand transportation over car ownership, which often results in an underutilized asset.
As we shift towards a transportation-as-a-service model, the operational strategies of car manufacturers will also transform. Presently, automobile manufacturers are the largest advertisers in the entire industry. If consumers stop purchasing cars and instead opt for rides with services like Uber or rentals from Zipcar, it will significantly alter the billions spent on car advertising. Additionally, this will change the profit distribution throughout the automotive sector.
If ridesharing companies succeed in making vehicles interchangeable so that consumers become indifferent to the type of car used to travel from point A to point B, they will be able to capture a substantial share of the transportation industry’s profits and reinvest those earnings into their technological platforms and marketplaces.
What implications arise if ridesharing firms increasingly divert revenue and profits from manufacturers of cars and trucks? One major consequence would be that ridesharing companies might prioritize investment in automation to reduce expenses rather than seek ways to employ drivers (who are likely car buyers), thereby potentially accelerating the decline of driving jobs. Another significant outcome could be that car dealerships become less important as sales channels since ridesharing firms might choose to purchase vehicles in bulk from manufacturers to cut costs.
Who creates the data? Who manages the data? And who holds ownership of the data?
Autonomous vehicles will not only produce but also consume a vast amount of data. Vehicles require driving information to train their neural networks, mapping data for road navigation and obstacle avoidance, regulatory information to follow speed limits and parking laws, and passenger data to create personalized travel experiences tailored to individual riders. Simultaneously, autonomous vehicles will generate terabytes of data daily from various sensors, such as cameras, radar, lidar, sonar, and GPS, which can be leveraged to enhance the vehicles’ driving models, assist city traffic planning, or optimize routes for ridesharing companies.
This data generation and consumption will necessitate new infrastructure and software, as well as different business models for data processing, sharing, and utilization. We have already witnessed several companies forming partnerships to either obtain access to or establish high-definition mapping data, which is critical to operate autonomous vehicles. Another vital aspect of the data equation is entities that employ human intelligence to produce training data for machines. For the foreseeable future, these “human-in-the-loop” systems will play a crucial role in generating high-quality training data and feedback loops.
The questions of data ownership, access rights, and processing methodologies will be pivotal for companies and regulators in the upcoming years. As vehicles generate and utilize increasing volumes of data, it will be essential to monitor who controls that data and how they opt to monetize it. It is likely that a number of significant companies will emerge, focused solely on the collection and refinement of data, and the collaborative dynamics between these firms and others in the automotive sector are currently under review.
In the conventional landscape of desktop and mobile operating systems, these systems can derive value by commoditizing hardware suppliers and aggregating consumers, providing other application developers with easy access to a user-friendly development platform and a distribution network that reaches a substantial audience of potential customers.
In the automotive sector, this indicates that companies like Uber and Lyft are well-positioned to serve as the primary hub for demand-side aggregation and supply-side commoditization. Ridesharing users are generally indifferent to the specific vehicle they travel in, and these companies act as an aggregation point for individuals looking to access a variety of transportation options. Lyft’s recent announcement regarding the creation of a self-driving division and system for car manufacturers implies they view this as a significant opportunity.
Nonetheless, this industry is still in its infancy, and a range of participants—from automotive suppliers like Delphi to tech giants such as Alphabet—are eager to ensure they secure a part of the transportation value chain. This could manifest in several ways, such as Tesla potentially creating a cohesive supply chain from components to rides that optimizes user experience, or Ford possibly discovering a method to deliver the most effective driving software that every other manufacturer may need to license.
Companies that offer higher-level services to both consumers and businesses and effectively unite supply and demand are likely to generate the greatest value and profit margins.
Regardless of the outcome, the victor in this competition for profit will have the capacity to invest more in research, enhance marketing efforts, and maintain a pace of innovation that outstrips rivals. This outcome will enable the winners to influence public discourse surrounding autonomous vehicles, steer industry recommendations on tax policies, and collaborate closely with local, state, and federal officials to reshape urban environments and society.
What is the influence and responsibility of regulators in the evolution of autonomous vehicles?
Technology firms historically have not excelled in collaboration with regulators (or automotive manufacturers), and while platforms like Airbnb and Uber have grappled with this dynamic, automakers, more than any other sector, have a track record of cooperating with government entities to comprehend (and potentially shape) regulations and compliance.
Regulators should be an essential component in the development and rollout of autonomous vehicles. It will be challenging to find a harmony between allowing the industry to lead regulatory approaches and permitting regulation to dictate industry innovations, but achieving this balance could result in significant advantages such as decreased traffic fatalities, reduced emissions, and improved transportation for all.
The transition from human-driven to autonomous vehicles will not occur overnight. For a considerable duration, vehicles operated by humans and those driven autonomously will coexist, which is a reality that regulators must consider.
If there’s one aspect that both the public and regulators should focus on over the next three to five years, it’s how companies intend to generate revenue from autonomous vehicles. The prevailing business models will influence decision-making, and these choices will have critical implications for the future of transportation.