Monday, November 5, 2018

Reading the Research: The Morals of Autonomous Cars

Welcome back to Reading the Research, where I trawl the Internet to find noteworthy research on autism and related subjects, then discuss it in brief with bits from my own life, research, and observations.

Today's article discusses a near-future technology that could serve as assistive technology for autistic people (and others with special needs) as well as the general populace.  Self-driving cars already exist, but US infrastructure generally isn't designed to support their widespread use.  If that changes, though, groups of people could buy into a single self-driving car, or into a service that maintains a fleet of them, and not have to worry about qualifying for a driver's license.

It's a documented fact at this point that autistic people have a harder time getting driver's licenses than the general populace.  Whether that's because training programs simply aren't geared to us, because of brain differences that make the demands of driving challenging, or because of other factors, the numbers are hard to ignore: only about 1 in 3 autistic people get licensed, versus roughly 90% of US adults.  So there's definitely a need for autonomous transportation.

The main sticking point for this technology has been how to handle crashes.  In cases where the automated system can't avoid probable fatalities (which won't happen often, but absolutely will happen), ethicists and programmers have to grapple with the question of who the system will sacrifice:  the pedestrian bystanders?  The driver and passengers in the other vehicle?  The passengers in the autonomous car?  It's not a pretty question, suffice it to say.

This study aims to quantify what the human answer to that question would be.  The researchers built an online game presenting various crash scenarios and asked participants worldwide to decide them.  (You can participate here if you want to!)  I was kind of surprised to learn that humanity apparently divides into only three broad moral clusters: Western, Eastern, and Southern.  But even then, the differences weren't very large.  Personally, I find that encouraging.  Also encouraging: people weren't much more eager to sacrifice a jaywalker (mildly illegal) than a perfectly law-abiding pedestrian.

One would think you could simply program the AI to make choices based on the opinion of the vast majority of humanity, or of the region the car operates in.  Unfortunately, it won't be that simple.  At least in the US, people sue at the drop of a hat, and no matter how moral or widely agreed-upon a decision is, somebody will disagree with it.  Lives may be lost because of the decision, and the issue will inevitably end up in court.

I'm hopeful that we can manage to put together the infrastructure, court rulings, and such needed to make this technology a reality for everyone... but I'm not counting on it within the next 10 years.  Maybe 20?  Maybe longer.  Hopefully within my lifetime, anyway.  
