Steven Adler
@sjgadler
Ex-OpenAI safety researcher (danger evals & AGI readiness), http://stevenadler.substack.com. Likes maximizing benefits and minimizing risks of AI
Some personal news: Since leaving OpenAI, I’ve been writing publicly about how to build an AI future that’s actually exciting: avoiding the worst risks and creating a genuinely good future. I’m excited to continue this work as a fellow of the Roots of Progress Institute.

🆕 blog post! My job involves funding projects aimed at preventing catastrophic risks from transformative AI. Over the two years I’ve been doing this, I’ve noticed a number of projects that I wish more people would work on. So here’s my attempt at fleshing out ten of them. 🧵
History aside, what's noteworthy about the substance of the AI Action Plan? A few things jumped out at me: