Recently, Nathan Young and I wrote about arguments for AI risk and put them on the AI Impacts wiki. In the process, we ran a casual little survey of the American public regarding how they felt about the arguments, initially (if I recall correctly) just because we were curious whether the arguments we found least compelling would also fail to compel a wide variety of people.
It's embarrassing (as a human) that adding counterarguments so dramatically reverses the sign. One way to compensate in future surveys -- and more importantly on the wiki -- would be to follow each counterargument with a response. Maybe that just flips the sign again, but the more optimistic possibility is that it mostly zeros out the "mere existence of a counterargument" effect, leaving just the effect of the argument itself. And it does so by further enriching people's understanding of the argument, rather than impoverishing it by withholding counterarguments.