
Book Review: The Age of Surveillance Capitalism

Posted: Dec 6, 2019

I. Introduction

This is going to be a book review in a slightly unorthodox style.

The first section is structured as a fairly casual Q&A. Why this format? I have always dreamed of having a personal assistant (digital or human, though nowadays digital is looking more and more like the better option) who is familiar with how I think, and who would read through the things that I don’t have the time to read myself. Then I could simply converse with this assistant in a Q&A style exchange to learn what they have read, in a much more condensed manner. This is my attempt at simulating such an interaction, and it will serve as the summary portion of the review. The questions are based on the ones I had while reading the book. The answers are given from the perspective of the book, personified, and are more or less devoid of my own opinions (though they are still based on my interpretations of the author).

The second section contains my own thoughts and commentary on the book. Most of these are directed at the more philosophical arguments that the author makes in the last third of the book. It contains a blend of ideas from the author and myself, which at times can be difficult to separate. Being in the technology field myself, I have to admit that I went into the book as a skeptic. But the author makes some excellent points at the foundational level of thinking, ones that made me ponder a great deal about the potential future we are heading into.


II. Q&A

Q: First things first, what is surveillance capitalism?

A: It is a new form of capitalism that emerged in the Internet age, where companies provide users with services, usually for free, in exchange for their behavior data (the “surveillance” part), and the right to sell that data to others on a new “behavior futures market” (the “capitalism” part). The canonical example of a surveillance capitalist firm is Google, which provides users with the free service of Internet search (among others). In return, Google gets the information on the users’ behavior (i.e. what they are looking for), and sells it to whoever wants to buy it in various forms (e.g. targeted advertisement placements).

Q: Okay, but that sounds more or less just like regular capitalism, except that the business model is novel and a bit unusual I guess. Is it really so different that you need to invent a new term for it?

A: Yes. The fundamental difference is in the nature of the goods being sold and how they are produced. Under traditional capitalism, we contribute labor to produce the goods, which are typically then sold back to us. It relies on the division of labor, and we are simultaneously producers and consumers. Surveillance capitalists, on the other hand, deal in human behavior. To them, we are nothing more than the raw materials that feed their real engines of production: Big Data, analytics, and the algorithms for behavior prediction. They have created a division of learning (or of knowledge), where what we see - the search results, the News Feeds, which I call the “first text” - is separated from what is produced and sold behind the scenes - our behavior prediction data, or the “shadow text”. This “shadow text” is where the true value and power lie, and we have neither access to it nor control over it.

Q: Still, it doesn’t sound that bad. So what if they are selling our data? We are getting great services for free, after all. Why are you especially against this new form of capitalism?

A: This is where the “but we are getting free products in exchange for our data” cliché fails to capture the insidiousness of what they are doing. You see, it isn’t just about our data, although it may have started there. Under this new style of capitalism, companies are incentivized to find, extract, and compute everything about us, but not for us. Their desire is to generate ever more accurate predictions about our behaviors, such as which ad we are most likely to click on. This road inevitably leads to increasing behavior manipulation and control, because that is the most effective way to achieve high predictability. Nudges, pushes, and cues are just the tip of the iceberg, and a slight preview of what’s to come.

Q: Okay, but isn’t that a bit far-fetched? This world you are describing, filled with behavior monitoring and modification mechanisms, sounds more like science fiction than soon-to-be reality. Aren’t you being way too imaginative with your extrapolation here?

A: Not really. I am extrapolating from current trends of course, but all this is based on the fundamental incentive structure of the current system. This future is inevitable as long as the structure that created and now supports surveillance capitalism remains unchanged. And from what I can see today, there are very few signs of this structure being changed, unless we do something about it.

Q: Oh, I see. A future of mass scale behavior manipulation does sound quite ominous. But haven’t rulers, states, and governments been doing exactly that, since, well, basically forever? What’s different now with these surveillance capitalists?

A: You are right, governments have engaged in various forms of behavior manipulation throughout history. Arguably they needed to, in order to rule effectively. But a few things make the situation different now. First, the scale and techniques available for doing this have expanded dramatically. Second, governments, at least in modern liberal democracies, have some checks and balances, and ultimately have to answer to the people. Private companies, on the other hand, do not answer to the people, but only to their shareholders. And with increasing lobbying efforts to fight against regulation, they are much more resistant to external influence. You are correct in recognizing that both governments and private companies strive for more expansive and reliable behavior manipulation, though. In fact, the underlying philosophy behind both of those efforts today is the same: the ideology of instrumentarianism.

Q: Instru- what?

A: Instrumentarianism. It is the belief that we should instrument, measure, and track every aspect of our lives, in both the digital and physical realms, for the purposes of generating more accurate predictions of our behavior (i.e. feeding the “shadow text”) and creating more ways to manipulate our behavior (the only way to generate guaranteed predictions). It’s a desire to turn everything into a Skinner box, if you know what those are. Things like augmented reality, the Internet of Things (e.g. smart fridges, toasters), and smart roads are just the beginning of an instrumentarian society. Imagine their natural extensions and end games years into the future.

Q: Technology is awesome though: I can issue voice commands to my lights at home, which would’ve seemed like magic to someone from 50 years ago. Wouldn’t it be amazing if everything was connected? Maybe you are just a Luddite who hates anything new and progressive.

A: I agree technology can be great. After all, technology is neutral, and holds no opinion on how it should be used; it’s just a tool, like an inanimate puppet. My issue is with the puppeteers behind the scenes: the people controlling it and using it to suit their own needs, for their own benefit. Right now that is a small group of elites, and their goal is to instrument our world so that they can predict and manipulate us into the guaranteed outcomes that facilitate their financial success. I still want the technology to be created; I just want the division of learning, and the asymmetry of power it causes, to be eliminated. Wouldn’t it be amazing if we could have all these smart, connected devices, while we the people remain truly in control?

Q: Hmm, I mean, sure, that sounds good in theory. But I feel like you want to have your cake and eat it too. I think a lot of the technological innovation we have seen was created in the pursuit of dominance by these surveillance capitalist firms, and that without them, the technology simply wouldn’t exist. What if the alternative is actually technological stagnation, instead of this utopia you imagine, where we get all the shiny new tech without the dominance of Big Tech?

A: … [The book does not really address this question]

Q: Alright, I suppose that is a tough question to answer. I understand your concerns for the most part. The path that surveillance capitalism and instrumentarianism are forcing us onto could potentially lead to quite a dystopian future. What do you suggest we do now? Some people are already fighting this, with laws like GDPR and CCPA. Plus you can always go off the grid if you really want to.

A: Things like GDPR are a good start, but nowhere near enough. See, the problem with these privacy regulations is that they are still only concerned with the “first text”, the one that is already accessible to us. What can we request under GDPR? Just our search histories, and the messages and links that we’ve posted. They do not touch the “shadow text” at all: our actual behavior prediction data. This problem cannot be solved by such limited measures and small scale individual acts of defiance. It is a structural problem, and we must dismantle the fundamental incentive framework of surveillance capitalism to truly solve it.

Q: How?

A: First we need to raise awareness (hence this book). People need to know that their behaviors are being tracked, computed, and manipulated for profit. Second, we need to bring some accountability to these big surveillance capitalist firms, like Google and Facebook. I keep coming back to these three questions: who knows, who decides, and who decides who decides? Right now it is the surveillance capitalist corporations that know, it is the behavior futures market form that decides, and it is the competitive struggle among the surveillance capitalists that decides who decides. This isn’t good, and we need better answers to these questions.


III. Commentary

1. The Question of Intent

The question of “does surveillance capitalism pose a dangerous threat to the future of society” is embedded within a broader question: whether the speed of technological innovation in recent years has been too fast. Both of these questions need to be looked at from a balanced perspective, as each falls along a spectrum of tradeoffs. The author definitely skews towards the more conservative end of that spectrum, and favors caution over speed, at least based on recent developments in the Internet age. And it really is speed that we are talking about here, not velocity, as rapid technological innovation only serves as the force that pushes, not the agent that orients. In our attempt to go fast, we can sometimes lose our sense of heading and end up somewhere unintended, pushed by the invisible inertia of the past. This, in my opinion, describes a great deal of how surveillance capitalism and instrumentarianism came into existence.

I don’t think that the founders of Google and Facebook set out to create exactly what they ended up creating. They didn’t start with clear intentions of building what their companies would become, with all the consequences that followed. Instead, I think that they too were unwittingly pushed along by forces beyond their control, among them the neoliberal capitalist philosophy that permeates the substrate of modern western societies, and the unavoidable public display of human nature (good and bad) that comes from connecting billions of people via the Internet¹. By the time they realized what was happening, they were already caught up so deep inside the emergent, new structure of the “behavior futures market” that their actions were bound to it.

Although I don’t think the author attributes malice to these companies’ founders from the very beginning, she does view them as complacent, and as now deliberately steering things in the “evil” direction. This is where I disagree somewhat with her take on the issue. The author lacks a certain degree of empathy for these surveillance capitalists, and fails to fully examine things from their perspective. Instead she falls into the trap of over-generalizing and over-simplifying “all rich corporate people” as “profit maximizing agents of the shareholders”. Suppose Mark Zuckerberg reads The Age of Surveillance Capitalism today, is truly moved by its message, and has an epiphany. Is he guaranteed to stop selling ads and dismantle the company immediately? Should he? Probably not. He may feel as Captain America did in Civil War: “I know we’re not perfect, but the safest hands are still our own”. He may come to formulate alternative, longer term solutions that, in the near term, look like inaction.

I don’t want to seem too defensive of these billionaires either. I’m not saying that they are not steering things in the “evil” direction for their own gain. They very well may be (some of them definitely are), but we simply don’t know enough about their perspectives to make accurate judgements. Moreover, the problems that these surveillance capitalist firms have generated, and the threat that they now pose to society, emerged without any particular mastermind, and they are hard to solve. The author is correct in pointing out that the rise of these mass scale behavior modification firms is unprecedented, but she does not recognize that even for the insiders - Zuckerberg, Page, etc. - figuring out what to do about it is equally unprecedented.

2. The Question of Will

Where the author’s true brilliance shines is in asking the more fundamental philosophical questions about our current predicament, near the end of the book. In particular, she examines the question of whether humans can simply be reduced to “blobs of meat responding to external stimuli”, and the implications of doing so. To a behaviorist, such as the famous B. F. Skinner, the answer is an unequivocal yes. The end result of taking such a line of thinking to its absurd extreme is the creation of a society whose citizens have their behaviors perfectly monitored, conditioned, and controlled, with no room for error or uncertainty, all working like ants in a colony towards some pre-determined higher goal. It is a hive-like world, optimized for efficiency. This was apparently Skinner’s lifelong dream, and he detailed a version of it in his utopian novel Walden Two.

Like the author, I personally reject this answer. I think that people’s will and subjective experience are important, and cannot simply be dismissed out of hand. I think that some of the vocal proponents of the absolutist view that “we have no free will” miss the point. It’s like yelling at people to stop using cash because pieces of paper have no objective, intrinsic value. The utility of a shared belief does not depend on the objective, factual truth value of the belief itself. It is clear that collectively believing that each of us has free will is useful for social organization.

But is that all free will is good for, a shared belief to help us coordinate? If so, then the hive-like, mass behavior-controlled society envisioned by behaviorist purists would render it obsolete. In fact, given the trends in new technologies, the instrumentarians of the near future can probably create large scale coordinated societies far more harmonious and productive than any that exist today. This is where the importance of our subjective experience comes in. I would not like to have my subjective experience of my will taken away to fulfill some higher order objective of the hive. But much to my dismay, I cannot push this argument any further, because the instrumentarian behaviorists can always claim that they could give me that experience as an illusion via brain stimulation. I don’t want that, yet I don’t have a good rational argument for why (this is the best I can do).

Alas, since I do believe that our will and experience matter, and that we are not reducible to “blobs of meat responding to external stimuli”, the potential future of a society driven by mass scale behavior modification concerns me. It is quite a large leap from where we are today to the hive-like world dreamed up by Skinner; we are nowhere near certain to be on that path. However, it is not an unimaginable or unintuitive leap. The scariest thing about this future is that, should it become reality, it will most likely happen gradually, unnoticed until it is too late. The now ubiquitous A/B testing for UX optimization (e.g. to increase click-through rates) arguably crept up on us like this: people generally don’t think of these tests as a form of operant conditioning experiment, yet in a sense that’s exactly what they are. Vigilance seems to be our best defence.
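To make that last point concrete, here is a minimal sketch of what a basic click-through A/B test amounts to. It is written in Python purely for illustration; the variant names, click probabilities, and bucketing scheme are all hypothetical, not drawn from the book. Users are bucketed into interface variants, their clicks are recorded, and the variant that elicits the most clicks wins.

```python
import random

# Hypothetical A/B test: show each user one of two UI variants,
# record clicks, and keep whichever variant elicits more of them.
VARIANTS = ["blue_button", "red_button"]

def assign_variant(user_id: int) -> str:
    # Deterministic bucketing: the same user always sees the same variant.
    return VARIANTS[user_id % len(VARIANTS)]

def pick_winner(clicks_by_user) -> str:
    shown = {v: 0 for v in VARIANTS}
    clicked = {v: 0 for v in VARIANTS}
    for user_id, did_click in clicks_by_user.items():
        variant = assign_variant(user_id)
        shown[variant] += 1
        clicked[variant] += int(did_click)
    # "Reinforce" whichever variant produced the higher click-through rate.
    return max(VARIANTS, key=lambda v: clicked[v] / shown[v] if shown[v] else 0.0)

# Simulate 1,000 users whose behavior slightly favors one variant.
simulated = {uid: random.random() < (0.12 if uid % 2 == 0 else 0.08)
             for uid in range(1000)}
print(pick_winner(simulated))  # most likely prints "blue_button"
```

Nothing in this loop looks sinister on its own, which is exactly the point: vary the environment, measure the response, keep whatever best reinforces the desired behavior. That is the structure of an operant conditioning experiment, rebranded as routine product optimization.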

3. The Question of Democracy

Although the dystopian (or utopian, depending on your position) world above sounds terrifying to me, I do think that it is sufficiently far away that we need not worry about it right now. What is more concerning for the nearer term future is the decline of democracy. The very end of the book contains some staggering statistics on the fall of the ideal of democracy in people’s minds. According to a 38-nation survey conducted by Pew Research in late 2017, the democratic ideal is no longer a sacred imperative, even for citizens of mature democratic societies such as the US, Sweden, the Netherlands, and Canada. Across the 38 countries, a median of merely 37% of respondents are exclusively committed to democracy, with high percentages of people supporting other, more authoritarian forms of governance (49% say that “rule by experts” is good, 26% endorse “rule by a strong leader”, and 24% prefer “rule by the military”).

In the long-view history of human civilization, there is a trend towards more equality, and opposition to dysfunctional social hierarchy. Observing this trend, scholars have remarked that the “Leviathan slouches left”². Some have attributed it, at least in part, to technology, and there is much truth to that. But however big or small a role technology has played in steering the Leviathan left, I think it is mostly by coincidence. Technology is neutral; it is indifferent to our philosophical leanings. I think the fundamental, deliberate driver of this trend, beneath some other powerful forces (such as economic ones), is our innate, universal desire for individual freedom and expression.

For a long time, and for the most part, technology has (coincidentally) helped us move in that direction, but that is far from guaranteed. It appears that some parts of technological development are now starting to exert a force in the opposite direction, towards consolidating authoritarian control. This, beyond anything else discussed in the book, is what worries and saddens me the most.

With the newfound power of Big Data, machine learning, and the possibility of using them for mass scale behavior manipulation, technological progress is evolving in a direction that favors the centralization of everything: information, power, influence. Advances in surveillance and information processing capabilities are making authoritarian rule easier and more stable. The sweet nectar of economic efficiency that these massive, unified data processing engines produce is tempting even the most mature democratic societies. Instrumentarianism, with its various benefits and drawbacks, is definitely not confined to the surveillance capitalists. Governments all over the globe, democratic and authoritarian alike, are drawn to its power. Having realized this, in the final sections of the book the author makes a heartfelt call to rekindle the flames of liberal democratic values. To do so, we must recognize and defend the individual’s agency, the importance of our subjective experience, and our will to will.

4. The Question of the Future

Can we have the best of both worlds? That is, can we somehow reap the benefits of an instrumentarian world while still remaining fully autonomous individuals? Maybe, but I think it’s unlikely. It is ultimately a tradeoff, one that we have been making ever since the hunter-gatherer era: how much individual autonomy are we willing to give up for more efficient group coordination? The author clearly thinks that we are forfeiting too much freedom for knowledge - knowledge for a more certain, predictable, deterministic future³.

Are we the last free-thinking individuals, before the whole world turns into a coordinated swarm with a life and will of its own, but devoid of individual autonomy? I won’t turn to Orwell here, as the author does at the end of the book. Instead, I think this is where Sartre’s existentialism can perhaps lend a hand. Sartre noted that there is no inherent purpose for humans to exist, no pre-determined reason to do anything. For us humans, existence precedes essence, and so we each create ourselves through what we do. The focus on efficiency in the current capitalist world, then, is perhaps misguided. Maybe there is no collective goal, achievable through hive-like coordination, that is inherently worth more than the individual freedom we must sacrifice for it.


  1. Many of the people who founded surveillance capitalist firms (notably Mark Zuckerberg of Facebook) seem to operate under the axiomatic assumption that connecting everyone in the world is intrinsically good. This assumption, which I view as the “Fundamental Axiom of Social Networks”, has proven to be questionable in recent years. When you connect everyone indiscriminately, you expose both the good and bad parts of humanity, and the propagation abilities of the two sides do not seem to be equal. These founders have mostly acknowledged this, and the axiom has since been modified somewhat: connecting everyone in the world has pros and cons, but is still an overall net positive for humanity. ↩︎

  2. The “Leviathan” here represents the collective forces of society that seem to take on a life of their own, despite being made up of independent individuals. In a sense, it is the broader, societal equivalent of Adam Smith’s “invisible hand” of macroeconomic forces. ↩︎

  3. There is a distinction here between knowledge for a more predictable future and knowledge of a more deterministic future. The former is about gaining more knowledge about today so that the future becomes more predictable; it is passive. The latter is about gaining more control over behavior today, which results in more certainty about the future as it becomes pre-determined; it is active. The author believes (and I agree) that the path we are on today leads to too much of both forms of knowledge, as the pursuit of the first kind naturally leads us to desire the second. ↩︎