The tyranny of the god scenario
By Michael Wulfsohn, 6 April 2018
I was convinced. An intelligence explosion would result in the sudden arrival of a superintelligent machine. Its abilities would far exceed those of humans in ways we can't imagine or counter. It would likely arrive within a few decades, and would wield complete power over humanity. Our species' most important challenge would be to solve the value alignment problem. The impending singularity would lead to our salvation, our extinction, or something worse.
Intellectually, I knew this "god scenario" was far from certain to come to pass. If asked, I would even have assigned it a relatively low probability, certainly much less than 50%. Nevertheless, it dominated my thinking. Other possibilities felt much less real: that humans might achieve direct control over their superintelligent invention, that reaching human-level intelligence might take hundreds of years, that there might be a slow progression from human-level intelligence to superintelligence, and many others. I paid lip service to these alternatives, but I didn't want them to be valid, and I didn't think about them much. My mind would always drift back to the god scenario.
I don't know how likely the god scenario really is. With currently available information, nobody can know for sure. But whether or not it's likely, the idea definitely has powerful intuitive appeal. In my case, it led me to change my beliefs about the world more quickly and radically than I ever had before, and I doubt I'm the only one.
Why did I find the god scenario so captivating? I like science fiction, and the idea of an intelligence explosion certainly has science-fictional appeal. That made the scenario easy to relate to, and perhaps helped me think through its implications. But the transition from science fiction to reality in my mind wasn't immediate. I remember repeatedly thinking "nahhh, surely this can't be right!" My mind was trying to put the scenario in its science-fictional place. But each time the thought occurred, I was surprised anew at the scenario's plausibility, and at my inability to rule out any of its key components.
I also tend to place high value on intelligence itself. I don't mean that I've assessed various qualities against some measure of value and concluded that intelligence ranks highly; I mean it in a personal-values sense. For example, my own level of intelligence is a big factor in my self-esteem. This is probably more emotional than logical.
This emotional effect was an important part of the god scenario's impact on me. At first, it terrified me. I felt like my whole view of the world had been upset, and almost everything people do day to day seemed to no longer matter. I would see a funny video of a dog barking at its reflection, and instead of enjoying it, I'd notice the grim analogy to the intellectual powerlessness humanity might one day experience. But apart from the fear, I was also tremendously excited by the thought of something so sublimely intelligent. Having not previously thought much about the limits of intelligence itself, I found the concept both consuming and eye-opening, and the possibilities inspiring. The notion of a superintelligent being appealed to me in much the way Superman's abilities have enthralled audiences.
Other factors played a part as well. I was influenced by highly engaging prose, since I first learned about superintelligence by reading this excellent waitbutwhy.com blog post. My professional background mattered too: I was accustomed to worrying about improbable but significant threats, and to arguments based on expected value. The concern of prominent people such as Bill Gates, Elon Musk, and Stephen Hawking helped. And since I get a lot of satisfaction from working on whatever I think is humanity's most important problem, I really couldn't ignore the idea.
But there were also countervailing effects in my mind, pulling me away from the god scenario. The strongest was the outlandishness of it all. I had always been dismissive of ideas that seem like doomsday theories, so I wasn't automatically comfortable giving the god scenario credence. I was also hesitant to introduce the idea to people who I thought might draw negative conclusions about my judgement.
I still believe the god scenario is a real possibility. We should assiduously prepare for it and proceed with caution. But I think I have gradually escaped its intuitive capture: I can now consider other possibilities without my mind constantly drifting back to the god scenario.
I believe a major factor behind my shift in mindset was my research interest in analyzing AI safety as a global public good. That research led me to think concretely about other scenarios, which increased their prominence in my mind. Relatedly, I began to think I might be better equipped to contribute to outcomes in those scenarios. This made me want to believe they were more likely, a desire compounded by the danger of the god scenario. My personal desires may or may not have influenced my objective assessment of the probabilities. But they definitely helped counteract the god scenario's emotional and intuitive appeal.
Exposure to mainstream views on the subject also moderated my thinking. In one instance, reading an Economist special report on artificial intelligence helped counteract the effects I've described, even though I actually disagreed with most of its arguments against the importance of existential risk from AI.
Exposure to the Effective Altruism community's work on different future possibilities also helped, as did my discussions with Katja Grace, Robin Hanson, and others during my work for AI Impacts. The exposure and discussions increased my knowledge and the sophistication of my views, letting me better imagine the range of possible AI scenarios. Listening to Elon Musk's views on the importance of developing brain-computer interfaces, and seeing OpenAI pursue goals that may not squarely confront the god scenario, pushed in the same direction. They gave me a choice: decide without further ado that Elon Musk and OpenAI are misguided, or think more carefully about other potential scenarios.
Relevance to the cause of AI safety
I suspect the AI safety community includes many people who experience, or have previously experienced, the god scenario's strong intuitive appeal. This tendency may be having some effects on the field.
Starting with the obvious, such a systemic effect could cause pervasive errors in decision-making. However, I want to make clear that I have no basis to conclude that it has done so within the Effective Altruism community. For me, the influence of the god scenario was subtle, and driven by its emotional facet. I could override it when asked for a rational assessment of probabilities. But its influence was far-reaching, affecting the thoughts to which my mind would gravitate, the topics on which I would tend to generate ideas, and what I would feel like doing with my time. It shaped my thought processes when I wasn't looking.
Preoccupation with the god scenario may also pose a public relations risk. Its strong appeal is not universal: to many people the scenario seems bizarre or off-putting, and it may polarize public opinion. At worst, a rift could develop between the AI safety community and the rest of society. This matters. For example, policymakers throughout the world can promote the cause of AI safety through funding and regulation, and their involvement is probably an essential component of any effort to prevent an AI arms race through international coordination. But it is easier for them to support a cause that resonates with the public.
Conversely, the enthusiasm created by the intuitive appeal of the god scenario can be quite positive, since it attracts attention to related issues in AI safety and existential risk. For example, others’ enthusiasm and work in these areas led me to get involved.
I hope readers will share their own experience of the intuitive appeal of the god scenario, or lack thereof, in the comments. A few more data points and insights might help shed light on the phenomenon.