In class this past week our biggest topic was probably the discussion on complex AI and the ethics of its development. That conversation left me mulling over one major question: is it even possible to develop AI ethically?

To talk about this, I'm going to use one contentious example of AI ethics in this post: the self-driving car. The convenience offered by a self-driving car is fairly obvious. Gone would be the need for driving schools, along with human error on the road, speeding, and, for the most part, traffic. However, there is one moral dilemma at the center of the conversation: the Trolley Problem. More specifically, if the car ever has to choose between the lives of the driver and passengers and the lives of one or more pedestrians, which should it choose? This becomes even stickier when we consider that an AI can process and "think" much faster than a human can. While a human in this situation will more likely than not make a choice based on pure instinct, an AI certified for a commercially available self-driving car would need to consider all the factors in the situation and make a definitive call.

People from all over the world have given wildly different answers to the problem, stemming from their own cultures and creeds (Self-Driving Cars: Why We Can't Expect Them to Be 'Moral'). So why should Tesla or Ford be able to make the final call on this contentious decision? We're at a moral deadlock, yet the binary process of an AI's decision making demands a definitive answer. And if we give that call to the corporations that make these cars, we effectively hand our safety around their products entirely over to them. Personal agency in the situation becomes moot even if you don't personally buy and operate a self-driving car. After all, it doesn't matter whether you're driving; if you are a pedestrian, the car could still one day hold your life in its metaphorical hands.
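To make that point concrete, here's a deliberately oversimplified sketch (in Python, with entirely made-up names and logic; no real self-driving system works like this) of what a hard-coded rule could look like. Notice that whoever writes this one comparison has already made the moral call for everyone on the road:

```python
# Hypothetical, oversimplified sketch of a hard-coded collision-choice rule.
# No real car is this simple; the point is that SOME definitive rule has to
# be written and shipped by someone before the car ever leaves the factory.

def choose_outcome(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Return which party the car 'protects' in an unavoidable collision."""
    # The entire moral decision lives in this one comparison. Whether it
    # favors passengers, pedestrians, or the larger group is decided not by
    # the driver or the pedestrian, but by whoever wrote this code.
    if pedestrians_at_risk > passengers_at_risk:
        return "swerve: protect pedestrians"
    return "stay course: protect passengers"

print(choose_outcome(passengers_at_risk=1, pedestrians_at_risk=3))
```

However the rule is tuned, someone at the company picked it, and everyone near the car lives with that pick.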

All this doesn't even touch on the numerous other ethical questions, discussed in group this week, that come with handing the agency of driving over to AI and the corporations that make it. How does a cop pull someone over if they are in a self-driving car? Would police be given some device to force a car to pull over? Would corporations work in tandem with police, using the sensors and cameras in self-driving cars to pull over and police passengers themselves? Doesn't this then give corporations the ability to program a car to stop on THEIR command? Just what kind of power would this give police and corporations to control people's lives? Let's not even get into all the possible problems with racial profiling and, in general, the non-race-conscious design that could go into the programming of these AIs (White Supremacy and Artificial Intelligence).

These are all awfully difficult questions that I personally just don't trust corporations to answer, especially given how little actual attention and power they give to the ethics boards meant to keep their dev teams in check (The Problem with AI Ethics).

Ultimately, I feel that in situations like these, AI, no matter how advanced we make it, should never be trusted to make such a decision, not just because of how untrustworthy the companies that make it can be and how much power this would give them, but because it forces a binary, end-all-be-all answer onto a question that should never feel clear-cut.

I feel the solution to a problem like this lies outside of AI and links up with a solution to another existential problem humanity is facing: environmental damage. The answer? Public transportation. Cars already cause massive amounts of pollution, require immense resources to create, and are a nightmare for infrastructure planning. Public transportation, in the form of trains, would largely remove cars, self-driving or not, from the equation.


Sources:

The Problem with AI Ethics: https://www.theverge.com/2019/4/3/18293410/ai-artificial-intelligence-ethics-boards-charters-problem-big-tech

White Supremacy and Artificial Intelligence: https://www.yesmagazine.org/peace-justice/technology-racism-artificial-intelligence-white-supremacy-20190828

Self-Driving Cars: Why We Can't Expect Them to Be 'Moral': https://phys.org/news/2019-01-self-driving-cars-moral.html