When machine “workers” are on 24/7 shifts, how can humans compete? When autonomous drones can achieve tasks without human intervention, what are our moral responsibilities?
In the rush to bring newer, smarter and more capable technologies to market, few are addressing the ethical and moral dilemmas that automation has raised. Psychology professor Joshua Greene, Director of the Moral Cognition Lab at Harvard University, however, is starting to relate his research about the brain and human morality to the world of IT and robotics.
At a February 18 seminar hosted by the MIT IDE, Greene noted that until recently, he didn’t fully make the connection between his own work and the long-term issues of Artificial Intelligence (AI). That intersection becomes very clear, however, when you think about the real-world issues of job displacement, how machines are programmed and what they are instructed to do.
The idea of machine intelligence displacing human labor--as discussed in MIT’s Erik Brynjolfsson and Andrew McAfee’s book, The Second Machine Age--is no longer science fiction; “it’s not crazy,” Greene said.
Drawing on insights from his 2013 book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Greene explained that we react most strongly to harmful actions like punching someone in the face, where the harm is caused intentionally and directly, and the victim is an identifiable person. The social and moral challenges posed by advancing AI are different. If advanced AI puts millions of people out of work, it won’t feel like intentionally punching someone--or a million people. The harm will be caused as an indirect side effect of doing something good, and those affected will be “statistical” people rather than identified individuals. It’s this mismatch between our moral psychology and the consequences at stake that makes modern moral problems so challenging.
Greene believes more focus is needed on critical problems like whether--and how--moral sensibilities can be programmed into autonomous machines such as military drones and self-driving cars. On a larger scale, societies have to re-imagine the world as one in which machines do more and more of the work currently done by humans. Technological advances may soon outpace our own moral sensibilities, according to Greene. “We’ll need to find new solutions.”
Joshua D. Greene is Professor of Psychology, a member of the Center for Brain Science faculty, and the director of the Moral Cognition Lab at Harvard University. He studies the psychology and neuroscience of morality, focusing on the interplay between emotion and reasoning in moral decision-making. His broader interests cluster around the intersection of philosophy, psychology, and neuroscience. He is the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.