A new book by a Missouri S&T psychologist shares insights into the cost of dismissing artificial intelligence’s (AI) outcomes as worth less than an actual person’s. These outcomes may be advice, evaluations of other people, harmful decisions, artistic creations or relational interactions.

The Machine Penalty: The Consequences of Seeing Artificial Intelligence as Less Than Human was recently published by Palgrave Macmillan. In it, Dr. Daniel Shank, associate professor of psychological science at Missouri S&T, explains how people tend to devalue similar outcomes and decisions when they come from AI rather than from humans.
“For more objective tasks, like playing chess or analyzing data, we assume AIs will produce superior advice and outcomes,” says Shank. “However, for personal and subjective advice, like recommending food based on taste or giving us personalized advice on our career, we often prefer humans.”
Shank says that there are downsides to giving AI advice less consideration. He continues with the food example, saying “while an AI may have never tasted and experienced food, if it is drawing on all the information on the internet regarding food tastes, it might be superior to a random person’s advice.”
Shank also says it depends on how much information the AI has about you personally. He warns that, like all advice, AI advice should be considered carefully and critically, but that discounting it just because it comes from a machine is a bias that might lead people to ignore potentially good ideas. Similarly, he says, there may be times when AIs make great decisions, evaluations, products and conversation, and judging those as less valuable means one could miss out on their advantages.
“In my book, I present the machine penalty as something that there is a lot of evidence for – that is, it is an empirically demonstrated phenomenon,” says Shank. “I don’t consider the machine penalty as intrinsically good or bad, or even a prejudicial bias. We may have good, valid reasons to prefer a human to an AI.
“Take art, for example – if we only care about the finished piece of art, we shouldn’t care who or what made it,” says Shank. “But if someone you care about painted a picture for you, then its value is wrapped up in your relationship to them, not just in the finished product. So sometimes a penalty against machines is completely justified, and we wouldn’t want to overcome it.”
The book includes arguments based on theories such as Computers Are Social Actors, anthropomorphism and algorithm aversion. Shank then applies the machine penalty to five primary areas: giving advice, evaluating people, causing harm, producing art and providing companionship.
Regarding the effects of the machine penalty, Shank says that if the penalty feels more like prejudice, then it’s not beneficial to anyone – even if there is no moral issue with being prejudiced against machines.
“If prejudice is the case, I don’t have any quick fixes to recommend,” says Shank. “But usually if one realizes that machines can do as well as humans in the aspects they care about, a penalty against the machines may fade over time.”
Shank specializes in the areas of social psychology and technology. His research primarily focuses on social psychological interactions with and perceptions of artificial intelligence, including morality, emotions, relationships, impressions, and behavior toward AIs. Shank holds a bachelor’s degree in computer science from Harding University, and master’s degrees in sociology and artificial intelligence and a Ph.D. in sociology, all from the University of Georgia.
About Missouri S&T
Missouri University of Science and Technology (Missouri S&T) is a STEM-focused research university of over 7,000 students located in Rolla, Missouri. Part of the four-campus University of Missouri System, Missouri S&T offers over 100 degrees in 40 areas of study and is among the nation’s top public universities for salary impact, according to the Wall Street Journal. For more information about Missouri S&T, visit www.mst.edu.