© 2025 AI Impacts
Just wanted to mention that if anyone liked my submissions (3rd prize, An Overview of “Obvious” Approaches to Training Wise AI Advisors - https://aiimpacts.org/an-overview-of-obvious-approaches-to-training-wise-ai-advisors/, Some Preliminary Notes on the Promise of a Wisdom Explosion - https://aiimpacts.org/some-preliminary-notes-on-the-promise-of-a-wisdom-explosion/),
I'll be running a project related to this work for AI Safety Camp (description here: https://docs.google.com/document/d/1kJn9F_G9ezeoOrjhc4x06iv7xMuygqWWkty-ztBL6W0/edit?tab=t.0#heading=h.b8lhi4yltarg).
Just wanted to add one acknowledgement: thanks to Anton Kalabukhov, who pointed out that my previous analysis of the principled approach didn't account for the possibility of building a seed AI.