
The Worst Outcome For AI

Posted: Dec 31, 2022

Erik Hoel opens his piece Why We Stopped Making Einsteins with a paragraph that has stuck with me for the last few weeks:

I think the most depressing fact about humanity is that during the 2000s most of the world was handed essentially free access to the entirety of knowledge and that didn’t trigger a golden age.

If the last part of that sentence were true, i.e. if a century from now we look back and the invention of the Internet really did not trigger a golden age, then I think the rest of it would be correct: that would be the most depressing fact about humanity. Unless something worse comes along to supplant it. Something like:

The actual most depressing fact about humanity is that during the 2020s we built large language models that can easily pass the Turing test, and that didn’t trigger any singularity.

People have said plenty of negative and depressing things about the future of AI, but by my estimation this is by far the worst outcome. Other outcomes that are pretty bad, but still not as bad, include:

  • The current best technique for building AI - training neural networks on large quantities of data - will not lead to anything that can reasonably be considered AGI. We are stuck in a local maximum with no clear way out
  • Large language models (LLMs, like GPT-3) will never be generally intelligent the way people are, no matter how massively we scale them. We waste enormous amounts of resources producing slightly-better but still generally-dumb chatbots
  • We manage to build an AGI (whether in the form of a supermassive LLM or otherwise) that we cannot control. It outsmarts us and destroys us

Yes, even bringing about the AI apocalypse and our own extinction somehow wouldn’t be the most depressing thing about humanity. Because at least we did something! Something that has never been done before, something cataclysmic and truly transformative. Sure, it transformed us all into corpses, but still… The alternative is just so… mid? Cruel, even. I mean, we already missed out on a golden age when we put the entire collective knowledge of humankind online for free. Now we will also miss out on the singularity foretold by countless sci-fi writers and AI researchers?


So is it actually possible for this worst outcome to become reality? Based on how we have been doing machine learning and how LLMs work today, I’d say it’s not completely impossible.

Malcolm Gladwell talked about the idea of strong-link vs. weak-link problems in an episode of his podcast titled My Little Hundred Million. Strong-link problems are ones where success is determined by how good your best is. Weak-link problems, on the other hand, are ones where success depends on how good your worst is. Soccer, as Gladwell explains, is more of a weak-link game, while basketball is more of a strong-link game. In soccer, teamwork is required to score any goals, so winning comes down to how good the worst player on the team is. Basketball is different in that really good players can score on their own, so to improve performance, teams often upgrade their top 1-2 players.

Technical and scientific innovation - “Zero to One” in Peter Thiel’s parlance - is a strong-link problem. Industry-transforming companies and groundbreaking paradigms of thinking depend on how good the best entrepreneurs or researchers are. Of course, any success inevitably involves significant elements of peer influence and luck along the way, so the second- or third-place “failures” are usually just as brilliant, if not more so. But regardless, the bottleneck here is the level of genius at the top of humanity.

However, our current ML technologies, including the best LLMs, are most adept at solving weak-link problems. They are trained on a massive corpus that is essentially the entire Internet. And as the famous Sturgeon’s law states: 90% of everything is crap. That is probably an overoptimistic take when it comes to the Internet, which is likely closer to 99% crap. We sort of know this, because otherwise we would already be in a golden age! So when the training data is almost all crap, and the algorithm works by reading all the crap and predicting which word comes next after all the previously seen words, the only thing that can come out is more crap. If LLM-generated content proliferates and floods the Internet, the non-crap content could dwindle from 1% to 0.000001% - six orders of magnitude harder to find than today. Forget singularities; we may plunge into a new Dark Age of Mediocrity.
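
To make the garbage-in, garbage-out point concrete, here is a toy next-word predictor. Real LLMs are neural networks trained by gradient descent, not raw count tables, and the corpus below is invented for illustration - but the core mechanism is the same: the model can only reproduce the statistics of whatever it was trained on.

```python
import random
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """The 'model' is nothing more than the next-word
    statistics of its training data."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(model, word, length=8):
    """Repeatedly sample the next word in proportion to how often
    it followed the current word in training."""
    out = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choices(list(followers),
                              weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

# 90% of the corpus is "crap"; the model faithfully reproduces that ratio.
corpus = ["the moon landing was faked"] * 9 + ["the moon landing was real"]
model = train_bigram_model(corpus)
print(generate(model, "the"))  # ends in "faked" ~90% of the time
```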


But I’m not actually that pessimistic. Once we realize the issue, there are ways to alter our approach to better tackle strong-link problems with AI. Instead of training on everything ever produced by humans, we can train only on the best works that have stood the test of time, or at least weight them much higher (a toy sketch of what that might look like follows below). Instead of using random minimum-wage workers from Mechanical Turk to do labeling for supervised and RL training, we can use the top scientists, researchers, and creatives instead. Okay lol this probably won’t happen. But the point is, if we start to work on techniques intentionally designed to be better at solving strong-link problems, we can lower the chance of getting the worst outcome for AI. If the extinction of humanity becomes the new worst problem we have to deal with, that’s not so bad, is it?
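
As a rough sketch of what “weight the best data higher” could mean, here is the same toy bigram idea with hypothetical per-document quality scores scaling the counts. The scores and the boost factor are made up for illustration; a real curation pipeline would be far more involved.

```python
from collections import defaultdict, Counter

# Hypothetical quality scores in [0, 1] - in practice these might come
# from expert curation, citations, or how well a work has aged.
scored_corpus = [
    ("the moon landing was faked", 0.05),  # abundant low-quality content
    ("the moon landing was faked", 0.05),
    ("the moon landing was real", 1.0),    # rare, stood the test of time
]

def train_weighted_bigram(scored_corpus, boost=20):
    """Scale each document's counts by its quality score, so the model's
    statistics lean toward the best data, not the average of everything."""
    counts = defaultdict(Counter)
    for sentence, quality in scored_corpus:
        weight = max(1, round(quality * boost))
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += weight
    return counts

model = train_weighted_bigram(scored_corpus)
print(model["was"])  # Counter({'real': 20, 'faked': 2})
```

Even in this toy setting, one high-quality document outvotes an ocean of crap, which is the whole point of attacking the strong-link problem directly.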