AGI in a vulnerable world
By Asya Bergal, 25 March 2020

I’ve been thinking about a class of AI-takeoff scenarios where a very large number of people can build dangerous, unsafe AGI before anyone can build safe AGI. This seems particularly likely if:

- It is considerably more difficult to build safe AGI than it is to build unsafe AGI.