Recently, Nathan Young and I wrote about arguments for AI risk and put them on the AI Impacts wiki. In the process, we ran a casual little survey of the American public regarding how they feel about the arguments, initially (if I recall) just because we were curious whether the arguments we found least compelling would also fail to compel a wide variety of people.
It's embarrassing (as a human) that adding counterarguments so dramatically reverses the sign. One way to compensate in future surveys -- and more importantly on the wiki -- would be to follow each counterargument with a response. Maybe that just flips the sign again, but the more optimistic possibility is that it mostly zeros out the "mere existence of a counterargument" effect, leaving the effect of the argument. And it does so by further enriching people's understanding of the argument, rather than impoverishing it by not showing counterarguments.
The expert opinion argument said that experts put "substantial credence (e.g. 5%) on human extinction."
I'm guessing the downward shift comes from (A) some bias that makes people report a higher p(doom) than their true credence (I think the world would look quite different if the median person on the street really expected a 1/9 chance of extinction), followed by (B) shifting that number toward the example expert figure (5%).
Where did you get your participants?
Positly, selected for being in America. There was also a choice of which source of participants to use, which was fairly arbitrary from my perspective (it seemed to affect things like whether we could give them extra rewards afterwards). I don't think we selected on anything else, but I can't presently log in to check due to a password manager failure.
I see, thanks. I somehow got it into my head that maybe y'all had gone out onto the street and polled people in person. I don't know why I thought this. That probably would have been a bad way to collect the data, but it was an entertaining image.
I think pseudorandomly via Positly, but @katja is probably the one to answer that.