When my sons learned about exponents in elementary school, they came home chattering about the googol – which is 10^100, or ‘1’ followed by 100 zeros. This is an impressively large number: physicists estimate that the entire universe contains only enough matter to make 10^93 hydrogen atoms. However, as with their nine-year-old peer who invented the terminology, the next step was even more exciting: the googolplex, or ‘1’ followed by a googol of zeros. When challenged to exhibit the number, the originator adopted a more flexible definition, purportedly:
‘1’ followed by writing zeroes until you get tired
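A quick back-of-the-envelope calculation (my arithmetic, not the original nine-year-old’s) shows why the flexible definition was the only practical one. Writing out a googolplex takes a googol digits, and even at one digit per hydrogen atom the universe comes up short:

    digits required:  10^100
    atoms available:  ~10^93

The entire cosmos is too small for the task by a factor of ten million.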
One of the most attractive aspects of science and engineering research is just this sort of child-like “what comes next?” The tendency is to think in exponential terms, although not always in powers of ten. Moore’s Law in semiconductors held – in its popular form – that microprocessors would double in power every 18 months, and it inspired a generation of chip designers, so that now our cell phones contain more processing power than the supercomputers of the ’70s. Digital storage systems have improved even faster, allowing us to indulge the narcissism of the “selfie” culture: every day we store 50 times as much data in the cloud as is required to encode the entire Library of Congress.
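For the curious, the compounding behind that claim is easy to check. Here is a minimal sketch in Python (my own illustrative arithmetic, using the popularly quoted 18-month doubling period rather than a figure from any particular chip’s data sheet):

    # Compound growth under an assumed 18-month (1.5-year) doubling period,
    # the figure popularly attached to Moore's Law.
    def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
        """Overall performance multiplier after the given span of years."""
        return 2 ** (years / doubling_period_years)

    # 1975 to 2020 spans 45 years -> 30 doublings -> a factor of about a
    # billion, roughly the gap between a '70s supercomputer and a phone.
    print(f"{growth_factor(45):.3g}")  # prints 1.07e+09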
The problem arises when what appears to be the next obvious step in physics and engineering becomes decoupled from social need. We spend billions of dollars each year launching satellites to probe the structure of the cosmos, and billions more powering and supplying the facilities that probe the matter that surrounds us. “Super-high” skyscrapers are pushing towards a kilometer in height, led by Saudi Arabia and Dubai, which appear to be engaged in a penis-envy contest that seems tolerable mostly because it sidesteps the threat of mass extinction posed by the last such contest (the nuclear arms race between the US and the USSR).
In the case of software development, the “what next” syndrome is particularly seductive. It used to be that we’d have to go to a store and buy a box to get a better operating system, word processor or tax preparation package. Now we purchase them online and download them over the internet. This means that a software developer with a “better idea” can push it out to the world almost immediately. Sometimes the rest of us wish that weren’t so – we call those experiences “zero-day exploits,” and they cost us collectively tens of billions of dollars to defend against and clean up afterwards.
But most of the time, we benefit from that capability, particularly as artificial intelligence improves. We already have medical diagnostic software that does better than most physicians at matching symptoms to diseases. Existing collision-avoidance algorithms allow us to sleep behind the wheel as long as a lane change isn’t required, and self-driving cars are only a few years away from widespread use. Credit card transaction monitoring protects us from fraud. These all function as well as they do because the rules described in software don’t disappear when the machine turns off. While the understanding encoded in the neural pathways of a Nobel Laureate vanishes upon death, software algorithms are explicitly described in a form that can be analyzed long after the originator retires. The lineage of successors can therefore improve endlessly upon the original.
The combination of robust, long-lived memory and algorithms means that the software industry believes it is curating the evolution of a new form of mind. Those developing that mind seek to extend its capabilities in two directions: recording everything that happens, and defining rules to control outcomes in every possible situation. Their ambition, in effect, is to create God.
Contemplating this future, a colleague at work effused that he “looked forward” to Big Brother, believing that in an era in which everything was known and available online for discovery, people would think twice about doing wrong.
In response, I suggested that many religions teach us that such a being already exists, but has the wisdom to understand that confronting us with our failings is not always the most productive course. In part, that is because error is intrinsic to learning. While love binds us in an intimacy that allows us to solve problems together that we could never solve alone, we’re still going to hurt each other in the process. Big Brother can’t decide who we should love, and neither can God: each of us is unique, and part of life’s great adventure is finding that place in which we create the greatest value for the community we nurture.
Furthermore, Big Brother is still a set of algorithms under centralized control. In George Orwell’s 1984, the elite used its control of those algorithms to subjugate the world. By contrast, the mind of God is a spiritual democracy: we choose and are accepted only through reciprocal gestalts.
Finally, Big Brother can never empathize with us. It can monitor our environment and our actions; it can even monitor our physiological state. But it cannot know whether our response is appropriate to the circumstances. It cannot be that still, quiet voice in our ear when our passion threatens to run amok. Big Brother cannot help us to overcome our weakness with strength – it can only punish us when we fail.
So, you in the artificial intelligence community: if you believe that you can create a substitute for God from digital technology, you should recognize that the challenge has subtleties that go beyond omniscience and perfect judgment. It includes the opportunity to engage in loving co-creation, and so to enter into possibilities that we can’t imagine – possibilities guaranteed to break any system of fixed rules. Your machine, if required to serve in that role, will unavoidably manifest a “Google-plex,” short for a “Google complex.”