James's Blog

Sharing random thoughts, stories and ideas.

Morality and Rationality

Posted: Jun 8, 2019
◷ 3 minute read

The Meat Paradox is an often-examined question about the morality of eating meat: how can people care about animals, yet also eat them? A dog owner can show deep affection for the canine while enjoying steaks without much care for the bovine. The common explanation runs along psychological lines. We are known for our ability to compartmentalize contradictory beliefs in our minds (arguably we rely on this ability for survival), so this dissonance in reasoning about eating meat does not cause a big issue for most people: the cognitive systems used for the cute dog and the delicious steak don’t activate simultaneously (unless we explicitly try to link them, which may turn some people vegan). The fact that modern civilization has mostly hidden the living, breathing cow behind the steak in our everyday lives is just the icing on the cake, making our innate compartmentalization even easier.

More broadly, this is just one of countless similar questions and explanations about morality. In many situations, people’s apparent beliefs or behavior seem contradictory to, or inconsistent with, some set of perfectly logical moral principles. And as in the Meat Paradox above, in almost all such cases the typical explanation comes down to some error on the human side: either a cognitive “glitch” (like our mental compartmentalization) or some form of ignorance (including a failure to reason at all). But consider it from the other side: why do we simply assume that our system of morality must be perfectly rational and logically consistent?

We tend to think of moral systems like mathematics, expecting moral statements not to contradict one another and to form a consistent system of principles. And there is some justification for this: the two have a lot in common. Neither exists inherently in nature; both are created by humans based on observations of the world; both are systems of abstractions that we apply to real-life situations to help us. But they differ greatly in the purposes of their application. Arguably the main use of mathematics (besides being a purely intellectual endeavor in its own right) is to aid our understanding of the natural world, whereas the main purpose of a system of morality is arguably to guide and modify human behavior, in order to steer society toward some desired state.

Because morality ultimately seeks to influence the human mind, and we know that our cognition is full of illogical flaws, I am not sure a rationally consistent system of morality can serve that purpose for us (unless we somehow all become fully logical robots). Even if it technically can, there is the question of efficiency. Perhaps, given the nature of our psychology, the most effective system for steering our behavior is not a set of internally consistent logical principles, but one that more closely resembles the way our minds actually function, and that accounts for our cognitive “glitches”.

Then there is the question of theoretical feasibility. The rationalist community, along with people concerned with AI risk, has been looking at the possibility of codifying a system of morality or values rigorously, like a system of mathematical axioms or formal program rules. But as we have known since the 1930s, no such complete and consistent system can exist in mathematics: Gödel’s two incompleteness theorems show this for any formal system powerful enough to express arithmetic. It is entirely possible that we will run into a similar theoretical limitation when trying to do the same for morality. Of course, this is unlikely to prevent whatever system we construct from being useful, just as mathematics remains immensely useful despite what Gödel proved.
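
For the curious, the first theorem can be stated informally as follows (a standard textbook formulation, paraphrased):

```latex
% Gödel's first incompleteness theorem, informal statement (standard formulation)
\begin{theorem}[G\"odel, 1931]
Let $F$ be a consistent, effectively axiomatized formal system that can
express elementary arithmetic. Then there is a sentence $G_F$ in the
language of $F$ such that neither $G_F$ nor $\neg G_F$ is provable in $F$;
in other words, $F$ is incomplete.
\end{theorem}
```

The second theorem adds that such an $F$ cannot prove its own consistency, which is arguably the more sobering result for anyone hoping to verify a formal value system from within.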