Thank you for the pretty comprehensive list. Though at least half of the arguments are very loosely defined, my main concern is that a taxonomy is only valid if it is usable. Since we are talking about existential risk, I would expect at least a word on how the defined types differ in their mitigation tactics.
I found this really useful, but felt its main weakness is that it doesn’t differentiate between “existential risk is possible” vs “is likely”. I answered the latter and often said no. But the argument from expert opinion mentions 5% risk which clearly isn’t trying to argue that it’s likely. Most arguments I hear about AI existential risk are about whether it’s <20% or >50%, but I can’t tell if that distinction is at play in this article. This is hugely important for how people feel and act about the threat: is it a credible but unlikely threat worth working hard on, or a near certainty that we should soon be taking drastic measures to prevent?
This is very well put together, and differentiates some subtle effects from each other. The Catastrophic Tools argument might have mentioned synthetic biology more explicitly. I also think people would find the Agents argument more compelling with a better social dilemma explanation (Racing to the Precipice is good) and a different example, maybe a human-driven environmental collapse. I don't really think any of these risks are existential, but every one could bring harms of different kinds, maybe all at once.
Looks like polls on crossposts don't work on Substack, so if you are reading this elsewhere and can't vote, go to the AI Impacts Blog version: https://blog.aiimpacts.org/p/ten-arguments-that-ai-is-an-existential/
In a time where AI is advancing at unprecedented speed, a few voices are quietly choosing a harder path:
One that puts safety before scale. Wisdom before hype. Humanity before power.
There’s a new initiative called Safe Superintelligence Inc. — a lab built around one single goal:
To develop AGI that is safe by design, not just by hope or regulation.
If you're someone with world-class technical skills and the ethical depth to match —
this is your call to action.
We don’t need more AI.
We need better, safer, more compassionate AI.
Spread the word. Support the mission.
I'm unable to vote on the polls.
Maybe they close after a certain time?
This was useful, thanks!
> This argument also appears to apply to human groups such as corporations
https://arbital.greaterwrong.com/p/corps_vs_si/