
by Cedric Hughes, Barrister & Solicitor, with regular weekly contributions from Leslie McGuffin, LL.B.


The Autonomous Vehicle Making Moral Decisions

Article Number: 679

Self-driving cars, also called autonomous vehicles (AVs), will smooth the flow of traffic, which in turn will reduce collective powertrain energy consumption and whatever emissions, if any, come from engine tailpipes.

Perhaps the greatest promise is the AVs’ potential to almost eliminate crashes. Indeed, most visions of a ‘zero-crash future’ among traffic safety experts rest on driving guided by artificial intelligence rather than by the unassisted human brain.

Neil Arason describes the advantages in his recently published work No Accident (2014, Wilfrid Laurier University Press) as follows: “…with shared artificial intelligence…Anything that happens to one car can be immediately shared for the benefit of every self-driving car in the intelligence grid: suddenly the car transporting you has the advantage of billions of kilometers of driving experience, even though it has just driven you off the sales lot…”

As Road Rules readers know, AV development is well underway. Sophisticated algorithms have enabled prototype AVs to locate themselves precisely, detect obstacles, and plan their pathways over hundreds of thousands of miles of testing. But now the even harder slog begins: programmers are challenged to define algorithms for situations of unavoidable harm, hopefully rare but not discountable, that will inevitably require ethical decision-making.

Such a dilemma could involve, for example, the need to choose between a pathway that will result in injury to others outside the AV and a pathway that will result in injury to the AV passengers.

A recent study by Jean-François Bonnefon of the Toulouse School of Economics, Azim Shariff of the University of Oregon, and Iyad Rahwan of the Massachusetts Institute of Technology asked participants what programming choice they would prefer in just such a situation. The study also varied details such as the number of people outside the AV who could be saved, whether the decision to swerve would be made autonomously by the vehicle, and whether participants answered from the perspective of an AV occupant or of an anonymous person.

The findings of the study were unsurprising, but inconclusive as a guide forward. Participants agreed that AVs should be programmed to minimize the death toll, an obvious starting point known as the ‘utilitarian’ approach. Participants were less confident, however, “that [AVs] would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian [AVs], more than they wanted to buy utilitarian [AVs] themselves.”
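To make the ‘utilitarian’ rule concrete, here is a minimal sketch of what such a tie-breaking policy might look like in code. It is purely illustrative: the Path record, its casualty estimates, and the choose_path function are hypothetical constructs invented for this column, not drawn from the study or from any real AV software.

from dataclasses import dataclass

@dataclass
class Path:
    """A candidate trajectory with estimated harm (hypothetical model)."""
    name: str
    casualties_outside: float  # pedestrians and other road users
    casualties_inside: float   # the AV's own occupants

def total_harm(path: Path) -> float:
    # The utilitarian rule weighs every life equally,
    # whether inside or outside the vehicle.
    return path.casualties_outside + path.casualties_inside

def choose_path(paths: list[Path]) -> Path:
    # Pick the trajectory that minimizes the expected death toll.
    return min(paths, key=total_harm)

# The dilemma described above: stay the course and harm several
# pedestrians, or swerve and harm the AV's own passenger.
options = [
    Path("stay the course", casualties_outside=3.0, casualties_inside=0.0),
    Path("swerve", casualties_outside=0.0, casualties_inside=1.0),
]
print(choose_path(options).name)  # prints "swerve"

A ‘self-protective’ variant would simply weight casualties_inside more heavily than casualties_outside, which is exactly the kind of programming choice the study asked its participants to evaluate.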

These results raise the question of whether clear guidelines for when an AV must prioritize the life of a passenger over the lives of others could be set by regulation. The researchers speculated, however, that this solution could discourage people from buying AVs, which “would delay the adoption of the new technology that would eliminate the majority of accidents.”

This problem is vexing.  The authors of this study pointed out that their work represents “the first few steps into what is likely to be a fiendishly complex moral maze.”  But, they added, these are problems that cannot be ignored:  “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”

Cedric Hughes

Hughes & Company Law Corporation, Vancouver

 

