Metasurvey: predict the predictors
By Katja Grace, 12 May 2016
As I mentioned earlier, we've been making a survey for AI researchers.
The survey asks when AI will be able to do things like build a Lego kit according to the instructions, be a surgeon, or radically accelerate global technological development. It also asks about things like intelligence explosions, safety research, how hardware hastens AI progress, and what kinds of disagreement AI researchers have with each other about timelines.
We wanted to tell you more about the project before actually surveying people, to make criticism more fruitful. However, it turned out that we wanted to start sending out the survey soon even more than we wanted that, so we did. We did get an abundance of private feedback, including from readers of this blog, for which we are grateful.
We have some responses so far, and still have about a thousand people to ask. Before anyone (else) sees the results, though, I thought it might be amusing to guess what they will look like. That way, you can know whether you should be surprised when you see the results, and we can know more about whether running surveys like this might actually change anyone's beliefs about anything.
So we made a second copy of the survey to act as a metasurvey, in which you can informally register your predictions.
If you want to play, here is how it works:
Go to the survey here.
Instead of answering the questions as they are posed, guess the median answer that our respondents give for each question.
If you want to guess something other than the median, do so, then write what you are predicting in the box for comments at the end (e.g. maybe you want to predict the mode, or the interquartile range, or what the subset of respondents who are actually AI researchers say; the sketch after this list shows how such statistics would be computed).
If you want your predictions to be identifiable to you, give us your name and email at the end. This will, for instance, let us alert you if we notice that you are surprisingly excellent at predicting. We won't make names or emails public.
At the end, you should be redirected to a printout of your answers, which you can save somewhere if you want to be able to demonstrate later how right you were about stuff. There is a tiny PDF export button in the top right corner.
You will only get a random subset of questions to predict, because that's how the survey works. If you want to make more predictions, the printout has all of the questions.
We might publish the data or summaries of it, other than names and email addresses, in what we think is an unidentifiable form.
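For concreteness, here is a minimal sketch (in Python, with made-up numbers) of how the summary statistics mentioned above would be computed from one question's answers. The values in `answers` are hypothetical, not real survey data.

```python
import statistics

# Hypothetical numeric answers to a single survey question (e.g. a year);
# these values are invented for illustration, not taken from the survey.
answers = [2025, 2030, 2030, 2040, 2045, 2060, 2100]

median = statistics.median(answers)             # the default target to predict
mode = statistics.mode(answers)                 # most common answer
q1, _, q3 = statistics.quantiles(answers, n=4)  # quartile cut points
iqr = q3 - q1                                   # interquartile range

print(f"median={median}, mode={mode}, IQR={iqr}")
```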
Some facts about the respondents, to help predict them:
They are NIPS 2015/ICML 2015 authors (so a decent fraction are not AI researchers)
There are about 1600 of them, before we exclude people who don't have real email addresses etc.
John Salvatier points out to me that the PhilPapers survey did something like this (I think more formally). It appears to have been interesting: they find that 'philosophers have substantially inaccurate sociological beliefs about the views of their peers', and that 'In four cases [of thirty], the community gets the leading view wrong...In three cases, the community predicts a fairly close result when in fact a large majority supports the leading view'. If it turned out that people thinking about the future of AI were that wrong about the AI community's views, I think that would be good to know about.
Featured image: By DeFacto (Own work) [CC BY-SA 4.0], via Wikimedia Commons