AI Impacts Quarterly Newsletter, Apr-Jun 2023
Every quarter, we publish a newsletter with updates on what’s happening at AI Impacts, with an emphasis on what we’ve been working on. You can see past newsletters here and subscribe to receive future newsletters and other blogposts here.
During the past quarter, Katja wrote an article in TIME, we created and updated several wiki pages and blog posts, and we began several new research projects.
We’re running a reader survey, which takes 2-5 minutes to complete. We appreciate your feedback!
If you’d like to donate to AI Impacts, you can do so here. Thank you!
Katja Grace’s TIME article
In May, TIME published Katja’s article “AI Is Not an Arms Race.” People sometimes say that the situation with AI is an arms race that rewards speeding forward to develop AI before anyone else. Katja argues that this is likely not the situation, and that if it is, we should try to get out of it.
References to AI Impacts Research
The 2022 Expert Survey on Progress in AI was referenced in an article in The Economist, a New York Times op-ed by Yuval Noah Harari, a Politico op-ed that argues for a “Manhattan Project” for AI Safety, and a report from Epoch AI’s Matthew Barnett and Tamay Besiroglu about a method for forecasting the performance of AI models.
We also maintain an ongoing list of citations of AI Impacts work that we know of.
Research and writing highlights
Views on AI risks
Rick compiled a list of quotes from prominent AI researchers and leaders about their views on AI risks.
Jeffrey compiled a list of quantitative estimates about the likelihood of AI risks from people working in AI safety.
Zach compiled a list of surveys that ask AI experts or AI safety/governance experts for their views on AI risks.
The supply chain of AI development
Harlan wrote a page outlining some factors that affect the price of AI hardware.
Jeffrey wrote a blogpost arguing that slowing AI is easiest if AI companies are horizontally integrated, but not vertically integrated.
Ideas for internal and public policy about AI
Zach also compiled a list of statements that AI labs have made about public policy.
AI timeline predictions
Zach and Harlan updated a list of AI Timeline surveys with data from recent surveys.
Jeffrey wrote a blogpost arguing that people should give greater consideration to visions of the future that don’t involve AGI.
Jeffrey wrote a blogpost arguing that AI systems are currently too reliant on human-supported infrastructure to easily cause human extinction without putting the AI system itself at risk.
Harlan, Jeffrey, and Rick submitted responses to the National Telecommunications and Information Administration’s AI accountability policy request for comment and the Office of Science and Technology Policy’s request for information.
Projects in progress
Katja and Zach are preparing to publish a report about the 2022 Expert Survey on Progress in AI, with further analysis of the results and details about the methodology.
Jeffrey is working on a case study of Institutional Review Boards in medical research.
Harlan is working on a case study of voluntary environmental standards.
Zach is working on a project that explores ways of evaluating AI labs for the safety of their practices in developing and deploying AI.
We are still seeking funding for 2023 and 2024. If you want to talk to us about why we should be funded or hear more details about our plans, please write to Elizabeth, Rick, or Katja at [firstname]@aiimpacts.org.
If you'd like to donate to AI Impacts, you can do so here. (And we thank you!)
We are running a reader survey in the hopes of getting useful feedback about our work. If you’re reading this and would like to spend 2-5 minutes filling out the reader survey, you can find it here. Thank you!