
On Systems Thinking

Posted: Jul 25, 2020

Modern intellectuals tend to embrace systems thinking. This type of thinking reasons about things holistically, at the systemic level. Objects and people are never analyzed in isolation, but within their relevant environments and contexts. Interactions between individuals and groups of entities are considered, to take second-order effects and beyond into account. This isn’t easy, as the complexity to deal with explodes exponentially as the number of relevant factors grows. But when it is done well, systems thinking often gives deeper insights into problems and offers more effective solutions that were previously unseen.

However, there is one thing that bothers me about this kind of thinking, and that is its tendency to absolve people of their individual agency. It’s similar to classical economics, where people are seen as “utility maximizing rational agents”. In systems analysis, the forces of the system, often in the form of incentives, entirely dictate how people act. People litter on the street because there is no incentive for them to help keep public spaces clean (factories polluting the environment are the larger-scale equivalent). People take shortcuts in their jobs and do low-quality work because there is no incentive for them to do any better (companies selling mediocre products because buyers have no alternatives are the larger-scale equivalent).

In systems thinking, individuals are almost always seen as some kind of intelligent self-serving automata, pushed to act one way or another solely by the forces of the system, like rocks in the currents of a river. People’s agency, moral obligation, sense of responsibility, and character are completely ignored. And it’s this sense of resignation that bothers me. It feels as though systems thinkers have given up: these automata are as good as people can ever be, and incentives and other systemic forces reign supreme. Yet people are not passive rocks to be pushed around by the currents. We have the ability to think and act independently, and can, if we choose to, behave in ways that go against what the environmental forces incentivize. Systems thinking tends not only to underplay this possibility, but to actively discourage improving it, by letting people “off the hook” too easily (bad behavior can always be explained by some underlying bad incentive structure).

It seems to me that with this bias, systems thinkers are ignoring an additional way to fix a badly behaving system. By focusing solely on the top-down approach of fixing suboptimal incentives, they mostly miss the bottom-up approach of improving individuals’ sense of agency and responsibility to the whole. Of course this is easier said than done, and I think the difficulty of this individual-level improvement is partly why systems thinking tends to ignore the angle: it does not even believe that you can fix things from the bottom. Various religions try to do this, some quite successfully. But since the rise of secularism, we have not really found a new and effective way to do it. It is perhaps one of the biggest failings of modern secular rationalist thinking, and remains a huge open problem: how do we effectively educate, instil, and encourage an optimal set of moral principles in individuals in today’s world¹?

This will never be solved as “cleanly” as systems thinkers would like, and that might be another reason the approach has been ignored. Unlike structural, systemic solutions, where bad incentives can be almost entirely eliminated through policy changes, there will always be bad actors. Systems thinking tends to dislike these imperfect or unreliable solutions, but they are still very much worth pursuing. We should be tackling problems from both ends: top-down and bottom-up. We should set up systems as best we can, while also encouraging individuals to try to do the “right” thing even when we screw up some systemic structures (and not dismiss bad behavior as solely caused by poor incentives). The result will be a strictly better system, more resilient to imperfections.


  1. A systems thinker will say: by setting up an appropriate set of incentives so that people will do this naturally, of course! But I don’t think this approach works (otherwise we probably would’ve done it successfully already). Things like morality and responsibility to the group as a whole are nebulous concepts that cannot be codified effectively in structures of incentives, which by nature have to be very well-defined. ↩︎