A TAI that kills all humans, without first ensuring it can do everything its supply chain requires, takes a risk of destroying itself. That would be a form of potential murder-suicide, rather than the convergent route to gaining long-term power.
Obviously, the amount of time a power-seeking AGI would need Earth to be hospitable to humans, in order to carry out its power-seeking plans, is more than zero days. I think it’s an interesting question just how much more than zero days it is. But I feel like this post isn’t very helpful for answering that question because it makes no effort whatsoever to “think from the AI’s perspective”—i.e. to imagine you're an AI and you're facing these problems and you actually want to solve them, like being in “problem-solving mode”, thinking creatively, etc. I don't think you tried to do that, because if you had, then there wouldn't be quite so many places in this post where you overstate challenges facing the AI by failing to notice obvious partial mitigations.
One starting point for thinking about this: If I were a human with the influence we might realistically hand over to AIs sometime in the next decade or two (for example, the CFO of a medium-sized tech company), are there things I could do that would make it easier for an AGI to keep running in the absence of humans?
ur-AIs are kindly, gently extinguishing us right now; half of humanity is below the replacement fertility rate. If you think of them as our descendants, the prospect hurts less. I do like their strategy: make life so engaging, fun, and meaningful that we stop having kids.