Fear Not Malevolent AI Robots
I’ve recently become hooked on Michael Solana’s new podcast series, “Anatomy of Next: Utopia,” a fascinating analysis of our most advanced and rapidly developing technologies that attempts to debunk the tech-gone-wrong dystopian nightmares that dominate much of the public imagination.
You all know how the story goes. Man invents an artificial intelligence (AI) smarter than himself. AI goes on to invent even smarter AIs. Smarter AIs exponentially become godlike and turn on mankind and either reduce us to slaves or drive us to extinction.
Implausible and Impossible
It ain’t gonna happen. The reason is something called the distributed knowledge problem. Although Solana hasn’t yet addressed it in his series, the Nobel Prize-winning economist F. A. Hayek explained it years ago in his last book, The Fatal Conceit. In short, it is a practical impossibility for anyone to capture and assess the sum total of information generated from the countless millions of economic decisions made by independent agents distributed around the world, each seeking to maximize his or her own well-being.
The root of the error, made by all central planners, is the belief that the world is some sort of Newtonian clockwork mechanism. If science could only deduce its operating principles, enlightened rulers could guide humanity toward utopia, or something approaching it. More sophisticated versions of this view add the proviso that effective planning requires the collection of enough data to inform central planners where to apply their wisdom, and adapt their plan as conditions evolve.
Well, aha! What if that evolution spun out of human control, as superintelligent AIs deduce the operating principles of the world and Big Data tells them everything they need to know to control it? If they turned against us, how could we humans stand in the way?
Omniscience is Unachievable
There are two problems with this scenario.
First, the world is not a Newtonian clockwork mechanism. Spend five minutes reading about chaos theory, which demonstrates that even the simplest physical system can sometimes behave in inherently unpredictable ways due to the tiniest perturbation in initial conditions. When you’re done, try to grasp the essential randomness that lies at the core of our physical universe as worked out a century ago by the physicists who developed quantum mechanics. Despite Albert Einstein’s famous lament, God does in fact play dice in that the randomness sewn into the fabric of the universe manifests in ways that are not only unknown but fundamentally and forever unknowable.
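You can see this sensitivity to initial conditions for yourself in a few lines of code. The sketch below uses the logistic map, a textbook example from chaos theory (not one the article itself cites, just a standard illustration): two starting values that differ in the tenth decimal place end up nowhere near each other after a few dozen iterations.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n) from x0.

    At r = 4.0 the map is fully chaotic: nearby starting points
    diverge exponentially fast.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two runs whose starting points differ by one part in ten billion.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)

# Early on the trajectories are indistinguishable; by the end they
# bear no resemblance to each other.
print(f"step 1:  {a[1]:.10f} vs {b[1]:.10f}")
print(f"step 50: {a[-1]:.6f} vs {b[-1]:.6f}")
```

The point isn’t this particular equation; it’s that if a system this simple defeats long-range prediction, the weather, the economy, and eight billion uncooperative humans are hopeless targets for any would-be planner, silicon or otherwise.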
So, even if a superintelligent AI knew the present state of the world perfectly, it couldn’t figure out what the weather will be the day after tomorrow, much less how to dominate a race of sentient beings who would be really pissed off if the robots got out of line.
Second, omniscience is not even achievable, as Hayek showed so well. And thankfully, it requires a lot less math to understand the distributed knowledge problem than to fathom chaos theory or quantum mechanics. Just read Leonard Read’s classic short essay, “I, Pencil,” an allegory of what it takes to successfully manufacture something as seemingly simple as a little lead pencil, built thanks to the unplanned cooperation of millions of people.
How Mankind Prospered
Mankind has found only one way to truly prosper in a world where the future is unpredictable and your neighbor’s thoughts are unknowable, and that is through the voluntary cooperation of free people exchanging value for value in free markets.
This is how man conquered nature. It is how we harnessed the knowledge and efforts of billions of people to climb out of poverty in an exponential explosion that shows no sign of stopping. It wouldn’t take long for superintelligent AIs to figure out that humans make very poor slaves, and even worse mortal enemies.
When it comes to tyrannically running the world, there’s no reason why truly superintelligent AIs wouldn’t come to the same conclusion as F. A. Hayek—or for that matter, the computer that controlled the U.S. nuclear arsenal in the iconic Cold War movie WarGames: the only winning move is not to play. I pity any AI who thinks otherwise.