A self-driving car killed a pedestrian. What now?
by Xavier Symons | 25 Mar 2018
A self-driving Uber vehicle has hit and killed a pedestrian in the United States, raising concerns about the regulation of new autonomous vehicle (AV) technology.
Elaine Herzberg was struck down late on Sunday night in Tempe, Arizona, after accidentally stepping in front of an autonomous vehicle -- a Volvo SUV -- travelling at approximately 60 kph.
The vehicle’s front sensor failed to detect the woman, and the safety driver in the car was not watching the road.
Uber has announced an immediate halt to its AV trials across North America, and police are investigating the incident. Legislation currently under discussion in Washington would introduce federal safety standards for AV technology.
Some analysts suggest that regulation of self-driving vehicles needs to be stronger. “Moving too quickly could put lives at risk and set back a technology that could ultimately help reduce the number of people killed and injured on the roads each year”, wrote Will Knight of the MIT Technology Review.
Others suggest that fatalities are inevitable -- albeit far less likely -- with self-driving vehicles. The Economist argued that, while self-driving vehicles will reduce the number of fatalities, “the sad truth is that there are bound to be fatal accidents on the road to a driverless world”.
These robot vehicles have a direct link to bioethics, especially to the well-known utilitarian trolley problem. After all, someone has to program them to make “decisions” about what to do when faced with conflicting choices.
Last year the German federal cabinet adopted 20 recommendations from its Ethics Commission on Automated Driving as a basis for the country’s car industry to advance its driverless technology. But the commission noted that “at the level of what is technologically possible today […] it will not be possible to prevent accidents completely. This makes it essential that decisions be taken when programming the software of conditionally and highly automated driving systems.”
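What would such programmed decisions even look like? Here is a deliberately toy sketch in Python: the Maneuver type, the candidate actions and every weight are invented for illustration, not drawn from any manufacturer’s software. The point is structural: once conflicting outcomes must be ranked by a cost function, writing the weights is where the ethics lives.

```python
# Hypothetical sketch only: real AV planners are far more complex, and no
# vendor has published a "trolley problem" module. This illustrates the
# abstract idea that conflicting outcomes must be ranked by programmed costs.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    harm_to_pedestrians: float  # expected harm, 0.0 (none) to 1.0 (fatal)
    harm_to_occupants: float    # expected harm to people inside the car
    traffic_violation: bool     # e.g. crossing a solid line to swerve

def maneuver_cost(m: Maneuver) -> float:
    """Score a maneuver by a weighted sum of programmed penalties.

    The weights below are invented for this sketch; choosing them *is*
    the ethical decision the article describes.
    """
    cost = 10.0 * m.harm_to_pedestrians + 10.0 * m.harm_to_occupants
    if m.traffic_violation:
        cost += 1.0  # small penalty: here, rules weigh less than lives
    return cost

options = [
    Maneuver("brake hard", harm_to_pedestrians=0.3,
             harm_to_occupants=0.1, traffic_violation=False),
    Maneuver("swerve left", harm_to_pedestrians=0.0,
             harm_to_occupants=0.2, traffic_violation=True),
    Maneuver("continue", harm_to_pedestrians=0.9,
             harm_to_occupants=0.0, traffic_violation=False),
]

best = min(options, key=maneuver_cost)
print(best.name)  # -> "swerve left" under these invented weights
```

Under these invented weights the car swerves; double the penalty for traffic violations and it might not. That sensitivity to hand-picked numbers is precisely what the German commission’s recommendations grapple with.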
The tragic death of an Arizona woman struck by a driverless Uber has revived public interest in robot ethics. How do these cars make decisions in life-and-death situations? And are their makers transparent enough about the standards they apply?
Such questions will be asked more and more as the age of autonomous vehicles approaches. Perhaps owners will even be able to program the settings themselves: Highly Ethical Cars would take almost no risks and take two hours to get to work, while Minimally Ethical Cars would run red lights and get there in five minutes. It’s going to be an interesting debate.
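To make the joke concrete, here is a whimsical sketch of what an owner-programmable ethics setting might reduce to. Everything in it is invented -- no production vehicle exposes a plan_trip function or a caution dial -- and the numbers are simply calibrated to the two-hour and five-minute extremes above.

```python
# Toy illustration of the "program it yourself" idea. The caution dial,
# thresholds and trip-time model are all invented for this sketch.

def plan_trip(caution: float) -> dict:
    """Map an owner-chosen caution level (0.0 reckless .. 1.0 saintly)
    to toy driving parameters and an estimated commute time."""
    if not 0.0 <= caution <= 1.0:
        raise ValueError("caution must be between 0.0 and 1.0")
    return {
        # km/h under the speed limit; negative means speeding
        "kph_under_limit": -10 + 20 * caution,
        "stops_for_amber_lights": caution >= 0.5,
        # 5 minutes at full recklessness, 2 hours at full caution
        "minutes_to_work": 5 + 115 * caution,
    }

print(plan_trip(1.0))  # the Highly Ethical Car: slow but safe
print(plan_trip(0.0))  # the Minimally Ethical Car: fast and dangerous
```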
Michael Cook | Editor, BioEdge