Making or breaking a thinking machine
By Katja Grace, 18 January 2015
Here is a superficially plausible argument: the brains of the slowest humans are almost identical to those of the smartest humans, so in the great space of possible intelligence, the 'human-level' band must be very narrow. Since all humans are basically identical in design, and you can move from the least intelligent human to the sharpest human with imperceptible changes, artificial intelligence development will probably cross this band of human capability in a blink. It won't stop on the way to spend years being employable but cognitively limited, or proficient but not promotion material. It will be superhuman before you notice it's nearly human. And from our anthropomorphic viewpoint, from which the hop separating the village idiot from Einstein looks like most of the spectrum, this might seem like shockingly sudden progress.
This whole line of reasoning is wrong.
It is true that human brains are very similar. However, this implies very little about how difficult it is to move artificially from the intelligence of one to the intelligence of the other. The basic problem is that the smartest humans need not be better designed than the least intelligent ones; they could be better instantiations of the same design.
What's the difference? Consider an analogy. Suppose you have a yard full of rocket cars. They all look basically the same, but you notice that their peak speeds are very different. Some of the cars can drive at a few hundred miles per hour, while others can barely accelerate above a crawl. You are excited to see this wide range of speeds, because you are a motor enthusiast and have been building your own vehicle. Your car is not quite up to the pace of the slowest cars in your yard yet, but you figure that since all those cars are so similar, once you get it to two miles per hour, it will soon be rocketing along.
If a car is slow because it is a rocket car with a broken fuel tank, making that car fast will be radically simpler than building your first car that can go over two miles per hour. The difference is something like an afternoon of tinkering versus two centuries of engineering. Intuitively, this is because the broken rocket car already contains almost all of the design effort needed to make a fast rocket car. That effort isn't currently being used, but you know it's there and how to put it to use.
Similarly, if you have a population of humans, and some of them are severely cognitively impaired, you shouldn't get too excited about the prospects for your severely cognitively impaired robot.
Another way to see that there must be something wrong with the argument is to note that humans can actually be arbitrarily cognitively impaired. Some of them are even dead. And the brain of a dead person can closely resemble the brain of a live person. Yet while these brains too are very similar in design, AI passed dead-human-level years ago, and this did not suggest that it was about to zip on past live-human-level.
Here is a different way to think about the issue. Recall that we were trying to infer, from the range of human intelligence, that AI progress would be rapid across that range. However, using only evolutionary considerations that are orthogonal to the ease of AI development, we could have predicted that human intelligence was likely to vary substantially.
In particular, if much of the variation in intelligence comes from deleterious mutations, then the distribution of intelligence is more or less set by the equilibrium between selection pressure for intelligence and the appearance of new mutations. Regardless of how hard it was to design improvements to humans, we would always see this spectrum of cognitive capacities, so the spectrum cannot tell us how hard it is to improve intelligence by design. (Though this would be different if the harm inflicted by a single mutation were likely to be closely related to the difficulty of designing an incrementally more intelligent human.)
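To see how that equilibrium works, here is a rough toy simulation (a sketch added for illustration, not part of the original argument; the population size, mutation rate MU, and selection strength S are made-up values). Each individual carries some number of deleterious mutations, offspring inherit their parent's count plus new mutations, and selection weeds individuals out in proportion to their load. The spread of mutation loads settles into an equilibrium determined only by MU and S; nothing in the model encodes how hard it is to design a smarter individual.

```python
import math
import random

POP_SIZE = 2_000    # individuals per generation (made-up value)
MU = 0.5            # mean new deleterious mutations per offspring (made-up)
S = 0.2             # fractional fitness cost per mutation (made-up)
GENERATIONS = 200   # plenty to reach equilibrium for these parameters

def sample_poisson(lam):
    """Basic Poisson sampler (Knuth's method), to keep the sketch dependency-free."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def run():
    loads = [0] * POP_SIZE           # start with a mutation-free population
    for _ in range(GENERATIONS):
        next_gen = []
        while len(next_gen) < POP_SIZE:
            parent = random.choice(loads)
            # Multiplicative selection: each mutation costs a factor of (1 - S).
            if random.random() < (1 - S) ** parent:
                next_gen.append(parent + sample_poisson(MU))
        loads = next_gen
    return loads

if __name__ == "__main__":
    loads = run()
    mean = sum(loads) / len(loads)
    var = sum((x - mean) ** 2 for x in loads) / len(loads)
    # At mutation-selection balance the mean load here is roughly MU / S,
    # with a roughly Poisson spread, whatever the "design" story is.
    print(f"mean load ~ {mean:.2f} (MU/S = {MU / S:.1f}), variance ~ {var:.2f}")
```

In this toy model the whole spectrum of "cognitive capacity" is a consequence of the mutation rate and the strength of selection; you could make designing improvements arbitrarily hard or arbitrarily easy without changing the output at all.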
If we knew more about the sources of the variation in human intelligence, we might be able to draw a stronger conclusion. And if we entertain several possible explanations for the variation in human intelligence, we can still infer something; but the strength of our inference is limited by the prior probability that deleterious mutations on their own can lead to significant variation in intelligence. Without learning more, this probability shouldn't be very low.
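To make the limits of that inference concrete, here is a tiny illustrative Bayes calculation (added for illustration; the probabilities are invented). Let A be 'the variation is mostly mutation-selection noise, and so says little about design difficulty' and B be 'the variation reflects a genuinely wide design band that AI would cross slowly'. If both hypotheses predict the observed wide spectrum about equally well, then observing the spectrum barely moves us, and our conclusion is mostly set by the priors.

```python
# Illustrative-only Bayes update; the numbers are assumptions, not from the post.
# A: variation in human intelligence is mostly mutation-selection noise,
#    and so carries little information about design difficulty.
# B: variation reflects a genuinely wide design band that AI would cross slowly.
def posterior_a(prior_a, like_a, like_b):
    """Posterior probability of A after observing the wide human spectrum."""
    prior_b = 1.0 - prior_a
    evidence = prior_a * like_a + prior_b * like_b
    return prior_a * like_a / evidence

# Both hypotheses predict a wide observed spectrum with high probability,
# so the observation is nearly uninformative and the prior dominates.
print(posterior_a(prior_a=0.5, like_a=0.95, like_b=0.90))  # ~0.51
print(posterior_a(prior_a=0.2, like_a=0.95, like_b=0.90))  # ~0.21
```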
In sum, while the brain of an idiot is designed much like that of a genius, this does not imply that designing a genius is about as easy as designing an idiot.
We are still thinking about this, so now is a good time to tell us if you disagree. I even turned on commenting, to make it easier for you. It should work on all of the blog posts now.
(Image: Rocket car, photographed by Jon 'ShakataGaNai' Davis)
(Top image: One of the first cars, 1769)