A policy guaranteed to increase AI timelines
The number of years until the creation of powerful AI is a major input to our thinking about risk from AI, and about which approaches are most promising for mitigating that risk. While there are downsides to transformative AI arriving many years from now rather than in just a few years, most people seem to agree that it is safer for AI to arrive in 2060 than in 2030. Given this, there is a lot of discussion about what we can do to increase the number of years until we see powerful systems that may pose a risk of catastrophic, perhaps permanent, harm to humanity. While many of these proposals have their merits, none of them can ensure that AI will arrive later than 2030, much less 2060.