6 Comments

Looks like polls on crossposts don't work on Substack, so if you are reading this elsewhere and can't vote, go to the AI Impacts Blog version: https://blog.aiimpacts.org/p/ten-arguments-that-ai-is-an-existential/

Thank you for the pretty comprehensive list. Though at least half of the arguments are extremely poorly defined, my main concern is that a taxonomy is valid only if it is usable. Since we are speaking of existential risk, I would expect at least a word on how the defined types differ in their mitigation tactics.

This was useful, thanks!

> This argument also appears to apply to human groups such as corporations

https://arbital.greaterwrong.com/p/corps_vs_si/

I found this really useful, but felt its main weakness was that it doesn't differentiate between "existential risk is possible" and "existential risk is likely". I answered the latter and often said no. But the argument from expert opinion mentions a 5% risk, which clearly isn't trying to argue that it's likely. Most arguments I hear about AI existential risk are about whether it's <20% or >50%, but I can't tell whether that distinction is at play in this article. This is hugely important for how people feel and act about the threat: is it a credible but unlikely threat worth working hard on, or a near certainty that we should soon be taking drastic measures to prevent?

This is very well put together, and it differentiates some subtle effects from each other. The Catastrophic Tools argument might have mentioned synthetic biology more explicitly. I also think people would find the Agents argument more compelling with a better explanation of the underlying social dilemma (Racing to the Precipice is good) and a different example, maybe human-caused environmental collapse. I don't really think any of these risks are existential, but every one could bring harms of different kinds, maybe all at once.
