What do ML researchers think you are wrong about?
By Katja Grace, 25 September 2017
So, maybe you are concerned about AI risk. And maybe you are concerned that many people making AI are not concerned enough about it. Or not concerned about the right things. But if so, do you know why they disagree with you?
We didn't, exactly. So we asked the machine learning (ML) researchers in our survey. Our questions were:
To what extent do you think people’s concerns about future risks from AI are due to misunderstandings of AI research?
What do you think are the most important misunderstandings, if there are any?
The first question was multiple choice on a five-point scale, while the second was more of a free-form, compose-your-own-succinct-summary-critique-of-a-diverse-constellation-of-views type thing. Nonetheless, more than half of the people who did the first also kindly took a stab at the second. Some of their explanations were pretty long. Some not. Here is my attempt to cluster and paraphrase them:
[Figure: Number of respondents giving each response, out of 74.]
Our question might have been a bit broad. 'People's concerns about AI risk' includes both Stuart Russell's concern that a system optimizing a function of n variables, when the objective depends on only some of them, will tend to set the rest to extreme values, and reporters' concerns about killer sex robots. Which, at a minimum, should probably be suspected of resting on different errors. [Edited for clarity Oct 15 '17]
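For concreteness, here is a toy sketch of the kind of thing Russell has in mind. It is purely illustrative and mine, not his or the survey's: the objectives, the budget constraint, and all the numbers are made up. An optimizer controls three variables but is handed an objective that mentions only two of them, and it ends up pushing the ignored variable to an extreme that a fuller objective would have penalized heavily.

```python
# Toy illustration (made-up setup): an optimizer controls x, y and z, but its
# objective mentions only x and y, so it pushes z to an extreme value.
import itertools

GRID = range(-10, 11)  # each variable ranges over -10..10

def true_value(x, y, z):
    # What we actually care about: x and y are good, extreme z is very bad.
    return x + y - 10 * abs(z)

def proxy_objective(x, y, z):
    # The objective handed to the optimizer: it simply omits z.
    return x + y

def optimize(objective):
    # Brute-force search. The budget constraint couples z to x and y,
    # so cranking up |z| lets the optimizer buy more x and y.
    best, best_point = float("-inf"), None
    for x, y, z in itertools.product(GRID, repeat=3):
        if x + y <= 5 + abs(z) and objective(x, y, z) > best:
            best, best_point = objective(x, y, z), (x, y, z)
    return best_point

point = optimize(proxy_objective)
print("optimizer's choice:", point)             # z ends up at an extreme
print("proxy value:", proxy_objective(*point))  # looks great (15)
print("true value:", true_value(*point))        # is terrible (-85)
```

Handing the same search true_value instead keeps z at zero; the bad outcome comes entirely from the objective leaving z out.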
So are we being accused of any misunderstandings, or are they all meant for the 'put pictures of Terminator on everything' crowd?
The comments about unemployment and surprising events, and some of the ones about AI ruling over or fighting us, seem likely to be directed at people like me. On the other hand, they are also all about social consequences, and none of these issues seem to be considered resolved by the relevant social scientists. So I am not too worried if I find myself in disagreement with some AI researchers there.
I am more interested if AI researchers complain that I am mistaken about AI. And I think they probably are here, at least a bit.
My sense from reading over all these responses is that the first three categories listed in the figure represent basically the same view, and that people talk about it at different levels of generality. I'd put them together like this:
The state of the art right now looks great in the few examples you see, but those are actually a large fraction of the things that it can do, and it often can't even do very slight variations on those things. The problems AI can currently deal with all have to be very well specified. Getting from here to AI that can just wander out into the world and even live a successful life as a rat seems wildly ambitious. We don't know how to make general AI at all. So we are really unimaginably far from human-level AI, because it would have to be general.
But this is a guess on my part, and I am curious to hear whether any AI researchers reading have a better sense of what views are like.
Whether these first three categories are all the same view or not, they do sound plausibly directed at people like me. And if ML researchers want to disagree with me about the state of the art in AI or how easy it is to extend it or improve upon it, it would be truly shocking if I were in the right. So I tentatively conclude that we are probably further away from general AI than I might have thought.
On the other hand, I wouldn't be surprised if the respondents were misdiagnosing the disagreement here. My impression is that AI researchers (among others) often take for granted that you shouldn't worry about things decades before they are likely to happen. So when they see people worried about AI risk, they naturally suppose that those people anticipate dangerous AI much sooner than they really do. My weak impression is that this kind of misunderstanding happens often.
By the way, the respondents did mostly think concerns are based largely on misunderstandings (which is not to imply that they aren't concerned):
[Figure: Number of respondents giving each response, out of 118.]
(Results taken from our survey page. More new results are also up there.)