I found this really useful, but felt its main weakness was that it doesn’t differentiate between “existential risk is possible” and “existential risk is likely”. I answered the latter and often said no. But the argument from expert opinion mentions a 5% risk, which clearly isn’t trying to argue that it’s likely. Most arguments I hear about AI existential risk are about whether it’s <20% or >50%, but I can’t tell whether that distinction is at play in this article. The distinction matters hugely for how people feel and act about the threat: is it a credible but unlikely risk worth working hard on, or a near certainty that we should soon be taking drastic measures to prevent?