A More Moral Artificial Intelligence?


As modern culture grows ever more reliant on technology, the adoption and refinement of artificial intelligence have steadily increased as well. With AI now embedded throughout the technology market, there is a growing desire to create an intelligence capable of moral reasoning.

The picture most often painted to illustrate the need for moral AI is that of a self-driving car faced with a difficult choice: an unavoidable accident forces it to either swerve into a group of adults walking on the sidewalk, taking many lives, or continue straight ahead and hit a minivan full of children, most likely killing at least one.

What does artificial intelligence do?

Even humans have a difficult time making choices like this, and they often disagree with one another, especially in the few short moments the mind has to weigh the consequences of each action and settle on the best response. This calls into question what morality is, and whether there is an absolute morality at all.

Vincent Conitzer, a professor of computer science at Duke University, has found that moral judgments are affected in part by prior beliefs: rights (such as privacy), roles (such as those within families), past actions (such as promises), motives and intentions, and other morally relevant features.


These moral biases are, by nature, not present within the controlled environment of an AI "consciousness," and so the reasoning behind an AI morality needs to be less reliant on personal experience.

Conitzer accounts for this in his experiment by following a two-part process: having people make ethical choices in order to find patterns, and then figuring out how those patterns can be translated into an artificial intelligence. He elaborates:

“what we’re working on right now is actually having people make ethical decisions, or state what decision they would make in a given situation, and then we use machine learning to try to identify what the general pattern is and determine the extent that we could reproduce those kind of decisions.”

In short, the team is trying to find the patterns in our moral choices and translate those patterns into AI systems.
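As a rough illustration of that approach (a minimal sketch, not the Duke team's actual system), the hypothetical snippet below encodes dilemmas using the kinds of morally relevant features mentioned above, such as rights, promises, roles, and lives at stake, then fits a simple classifier to a handful of recorded human judgments so it can guess how people would decide a new case. All feature names and values are illustrative assumptions.

# Minimal sketch: learn the pattern in human ethical choices from
# hand-coded dilemma features, then reproduce it on a new dilemma.
from sklearn.tree import DecisionTreeClassifier

# Each dilemma is described by morally relevant features (hypothetical):
# [violates_a_right, breaks_a_promise, harms_a_family_member, net_lives_saved]
dilemmas = [
    [1, 0, 0, 3],   # violates a right but saves 3 net lives
    [0, 1, 0, 0],   # breaks a promise, no lives at stake
    [0, 0, 1, 1],   # harms a family member to save 1 net life
    [0, 0, 0, 5],   # no rights or promises involved, saves 5 net lives
]

# Labels: how surveyed people said they would act (1 = take the action).
human_judgments = [1, 0, 0, 1]

# Machine learning step: fit a simple model to the observed pattern of choices.
model = DecisionTreeClassifier(random_state=0)
model.fit(dilemmas, human_judgments)

# A new dilemma: violates a right and breaks a promise, saves 2 net lives.
print(model.predict([[1, 1, 0, 2]]))

In practice the features, the pool of respondents, and the model would all be far richer, but the shape of the process is the same: human decisions in, a learned decision pattern out.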

Once again another hill to mount…

A major problem with this method of determining morality is that our moral judgments are neither objective, timeless, nor universal. Moral norms surveyed 100 years ago would strike us as barbaric today, given the sexism, racism, and other prejudices that were commonplace in that era.

It would be reckless to assume that society has reached a moral apex beyond which no growth is possible. Put simply, any AI we build today could, in 100 years, be considered a barbaric machine of a bygone age.

A machine capable of adapting to and learning from the morality of its time, drawing on both past and current experience, seems the only answer.
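One way to picture that adaptability (purely an assumption on my part, not something described in the article) is an online learner that keeps updating as newer human judgments arrive, so patterns learned from older data can be gradually revised rather than frozen in place. The data below are hypothetical.

# Minimal sketch: incrementally update a model as newer judgments arrive,
# instead of retraining once on a fixed, possibly outdated snapshot.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# Hypothetical batches of (dilemma features, human judgment) collected over time.
batches = [
    (np.array([[1.0, 0.0, 0.0, 3.0], [0.0, 1.0, 0.0, 0.0]]), np.array([1, 0])),  # earlier era
    (np.array([[1.0, 0.0, 0.0, 3.0], [0.0, 0.0, 0.0, 5.0]]), np.array([0, 1])),  # later era, judgments shift
]

for features, judgments in batches:
    # partial_fit updates the existing model with the new batch,
    # letting more recent data reweight what was learned before.
    model.partial_fit(features, judgments, classes=np.array([0, 1]))

print(model.predict([[1.0, 0.0, 0.0, 3.0]]))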

A distant possibility.

While the trials and studies aimed at creating a workable moral AI are still far from being put into practice, the effort invested will undoubtedly prove invaluable for building artificial intelligence that is both more effective and more widely accepted.