According to international experts, the introduction of new road rules for connected and automated vehicles (CAVs) must be guided by an ethical framework developed by the government, road users, and other stakeholders.
They warn that, contrary to popular belief, strictly prohibiting CAVs from ever violating existing traffic rules may actually jeopardize road safety. The issue therefore demands close scrutiny if these high-tech vehicles are to fulfill their potential to reduce road casualties.
The rapid adoption of new technologies has revealed a further challenge: road safety regulations and traffic laws were written with human-driven vehicles in mind. As a result, policymakers may discover that ground-based robots, also known as Personal Delivery Devices (PDDs), are operating in a governance vacuum. Many of these PDDs are designed to drive in pedestrian areas or on a portion of the road, making it unclear whether they should be subject to vehicle regulations at all.
“While they promise to reduce road safety risk, CAVs, such as hybrid AI systems, can still create collision risk due to technological and human-system interaction issues, traffic complexity, interaction with other road users, and vulnerable road users,” writes UK transport consultant Professor Nick Reed of Reed Mobility in a new paper published in Ethics and Information Technology.
“Ethical goal functions for CAVs would allow developers to optimize driving behaviors for safety under uncertain conditions while allowing for product differentiation based on brand values.”
This is significant, the researchers note, because it means not all vehicle brands would have to drive in exactly the same manner, leaving room for brand differentiation. CAVs, including driverless cars, are already being used by transport services around the world to provide new passenger and freight options that improve road safety, ease congestion, and increase driving comfort and transport system productivity. According to a recent European Commission report, CAVs may sometimes need to breach strict traffic rules in order to reduce road safety risk, and should do so with appropriate transparency.
The key recommendation from the EC report (Bonnefon et al., 2020) highlights the need to legislate so that CAVs, across varying traffic and environmental conditions, can exercise the equivalent of human discretionary behaviors, according to Professor Reed, Flinders University Dean of Law Professor Tania Leiman, and other European experts in autonomous vehicle safety.
Matters are complicated further when laws differ between countries and regions, and when cars are designed or built in one country but used in another, notes Professor Leiman of Flinders University's College of Business, Government, and Law.
“In a dangerous encounter, an automated system that has ‘deduced’ driving behavior from training examples cannot ‘explain’ or ‘justify’ its decisions or actions,” she says. “This could be an issue if a manufacturer is required to explain specific behavior in the event of an incident or if civil or criminal liability is disputed.”
In the research paper, speeding and mounting the curb to avoid a collision are also evaluated as case studies, with the authors arguing that ethical goals should be established through extensive public consultation and deliberation so that they are publicly acceptable and understood.
According to the researchers, it is critical to have a standardized framework in place to allow vehicles traveling from one jurisdiction to the next to update their road rules in order to make driving standards safe, predictable, reasonable, uniform, comfortable, and explainable – for drivers, manufacturers, and all road users.
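At its simplest, the standardized framework described above could resemble a machine-readable set of rule parameters that a vehicle loads when it detects a jurisdiction change. The sketch below is a hypothetical illustration only: the jurisdiction codes, parameter names, and values are invented, not drawn from the paper or any real regulation.

```python
# Hypothetical sketch: a CAV loads jurisdiction-specific rule parameters
# when crossing a border, so its driving policy stays legal and
# predictable. All names and values here are invented for illustration.
RULE_SETS = {
    "UK": {"drive_on": "left", "urban_speed_kph": 48, "kerb_mount": "emergency_only"},
    "NL": {"drive_on": "right", "urban_speed_kph": 50, "kerb_mount": "emergency_only"},
}

class VehiclePolicy:
    def __init__(self, jurisdiction: str):
        self.rules = None
        self.update_jurisdiction(jurisdiction)

    def update_jurisdiction(self, jurisdiction: str) -> None:
        # Fail closed: refuse to operate under an unknown rule set rather
        # than guess, keeping behavior predictable and explainable.
        if jurisdiction not in RULE_SETS:
            raise ValueError(f"no certified rule set for {jurisdiction}")
        self.rules = RULE_SETS[jurisdiction]

policy = VehiclePolicy("UK")
policy.update_jurisdiction("NL")   # crossing into the Netherlands
print(policy.rules["drive_on"])    # -> "right"
```

The fail-closed check reflects the article's emphasis on predictability: a vehicle should not improvise its legal obligations when it cannot verify them.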
Driverless cars are just one example of artificial intelligence’s gradual and ongoing advancement, which has prompted numerous articles about the ethical concerns this all raises. The New York Times published a story on its website a few days ago titled “How Tech Giants Are Devising Real Ethics for AI,” in which it stated that “a memorandum is being circulated among the five companies with a tentative plan to announce the new organization in the middle of September.”
“We propose that responsibility for developing the framework of CAV ethical goal functions be shared by an appropriate international body, such as the UNECE’s Global Forum for Road Traffic Safety, and relevant individual country agencies such as the Department of Transport,” says co-author Dr. Leon Kester, a senior research scientist at TNO in the Netherlands.
“Once an ethical goal function has been agreed upon and enacted by legislators,” Dr. Kester says, “CAV systems could be designed in such a way that they optimize with the highest utility for road users within predefined boundaries, without having a predefined set of infinite scenarios and precise definitions on what to do. We also need to set up a socio-technological feedback loop where things can be evaluated and changed if they are no longer in line with our societal goals.”
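One way to picture the "ethical goal function" Dr. Kester describes is as a weighted utility that a CAV planner maximizes over candidate maneuvers, subject to hard safety boundaries. The sketch below is a hypothetical illustration, not the formulation from the paper: the weights, risk estimates, and maneuvers are all invented, and a real system would estimate risk from perception and prediction models rather than fixed numbers.

```python
# Hypothetical sketch of an "ethical goal function": score candidate
# maneuvers with a weighted utility and pick the best one that stays
# inside a predefined hard safety boundary. All numbers are invented.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_risk: float     # estimated probability of harm (0..1)
    rule_violation: float     # severity of any traffic-rule breach (0..1)
    passenger_comfort: float  # 0..1, higher is smoother

# Regulator-set weights: safety dominates, rule adherence next, comfort
# last. A brand could tune the comfort term within allowed bounds,
# echoing the paper's point about product differentiation.
WEIGHTS = {"safety": 10.0, "rules": 3.0, "comfort": 1.0}
HARD_RISK_LIMIT = 0.5  # maneuvers above this risk are never permitted

def utility(m: Maneuver) -> float:
    return (-WEIGHTS["safety"] * m.collision_risk
            - WEIGHTS["rules"] * m.rule_violation
            + WEIGHTS["comfort"] * m.passenger_comfort)

def choose(maneuvers: list[Maneuver]) -> Maneuver:
    feasible = [m for m in maneuvers if m.collision_risk <= HARD_RISK_LIMIT]
    if not feasible:
        raise RuntimeError("no maneuver within the hard safety boundary")
    return max(feasible, key=utility)

# Example from the paper's case study: mounting the curb breaks a
# traffic rule but, in this scenario, carries less collision risk.
options = [
    Maneuver("brake in lane", collision_risk=0.4,
             rule_violation=0.0, passenger_comfort=0.2),
    Maneuver("mount the curb", collision_risk=0.05,
             rule_violation=0.6, passenger_comfort=0.1),
]
print(choose(options).name)  # -> "mount the curb"
```

Under these invented weights the rule-breaking maneuver wins because its safety gain outweighs the violation penalty, which is precisely the kind of trade-off the article argues must be settled by public deliberation rather than by each manufacturer alone.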