James's Blog

Sharing random thoughts, stories and ideas.

Post-Truth World

Posted: Jan 20, 2019
◷ 4 minute read

The term “post-truth” has been popularized in recent years, due to circumstances arising from our social and political systems. It commonly refers to the growing influence of emotion and personal bias, over facts, in shaping people’s opinions and behavior. Many people have written about this shift, offering many explanations and solutions, which I will not focus on here. Instead, I want to talk about another kind of “post-truth”, which can be seen as a potential next step in the evolution of the current post-truth world. And that is: when our ability to arbitrarily construct digital evidence becomes so good that it’s nearly impossible to distinguish between what is real and what isn’t.

Generating fake “evidence” is nothing new. We’ve been doing it for a long time, from the ancient art of forging signatures to the more modern ability to Photoshop images. Movies and television have been capitalizing on advancements in this field for years, with many of today’s biggest blockbusters shot mostly in front of a green screen, the rest of each scene filled in via CGI in post-production. But with the increase in computing power, coupled with the rise of many effective machine learning techniques over the last decade or so, we are now able to create more types of digital information at much higher quality.

To put this more concretely, companies can now generate arbitrary speech from a short voice recording. Given just one or two minutes of someone speaking or reading, we can generate audio of their voice saying anything we want. Other researchers can reconstruct the facial expressions and mouth movements of someone speaking based on a short clip of recorded video. Combining these two technologies, we can effectively create fake videos of anyone saying anything, and they are almost indistinguishable from real ones. All we need is some brief audio and video data from the person, which is easy to collect, especially for public figures.

This is very different from the current version of “post-truth” that people commonly talk about. The situation right now, for the most part, seems to be that even though facts are technically accessible to people, many are simply not looking at (or ignoring) them for one reason or another (e.g. personal biases, filter bubbles). But this new, evolved version of “post-truth” means that everyone, including the people who fact-check everything rigorously today, will have a hard time telling apart what is true and what is not. Facts will no longer be technically accessible, even for people who care to look for them.

If this turns into reality, we seem to be pretty screwed. When this kind of digital forgery technology becomes commoditized, anyone will be able to create real-looking video footage of anyone saying or doing anything, and spread it with the power of social media. How can we believe anything in a post-truth world like this? We could all be misled into various ill-formed ideologies and narratives, based on different groups’ generated false evidence, creating far more polarization and division than today. Or even worse, society could devolve into a cesspool of distrust on the brink of collapse, with drastically decreased economic and political efficiency.

But the future may not be as bleak as it seems. Concluding that a problem which lacks good solutions today will be our doom is a bit narrow-minded. History offers plenty of examples of threats once thought apocalyptic that faded as society progressed. The fear that civilization would end because agricultural output could not keep up with exponential population growth is a recurring one, and it has completely failed to materialize (so far, at least). A more relevant example is the rise of Photoshop, which had the potential to destroy our trust in the authenticity of images. But we seem to have adjusted well to the post-Photoshop world. Forgery detection techniques got better. People became more skeptical of images in general. And by relying on other methods of authenticity validation (e.g. interpersonal trust, multi-party consensus systems), we continue to be able to use images as evidence for facts. Perhaps the same could happen with all the new types of false “facts” that we can begin to generate now.

With that said, I still wonder if we could eventually hit a breaking point where we can no longer cope with the post-truth world. Many of the countermeasures we use today (including for forged images) still rely on our ultimate ability to validate authenticity, even if only very few people can do it. News organizations do not create and publish doctored images today, even if they could fool most people, because experts can still detect tampering and expose them for it. But once we pass the point where even experts cannot tell the difference, it could become a much harder problem. Perhaps it will evolve into a constant game of cat and mouse between fake video generator AIs and authenticity detector AIs.