Aidan Homewood
@adnhw
Risk Management @GovAI_ 🇳🇿
@OpenAI and @MistralAI said they will sign the EU AI Code of Practice. This puts pressure on @AnthropicAI, @GoogleDeepMind, and @Microsoft.
Oversight and control are related tools in AI safety: how do you tell them apart? Pleased to have contributed to this paper. Check out @davidmanheim's thread for more⬇️
AI developers often say “don’t worry, the systems have human oversight.” But for many current AI systems, meaningful oversight is impossible! New paper (with @adnhw): “Limits of Safe AI Deployment” 🧵
Three weeks ago a car bomb exploded outside an IVF clinic in California, injuring four people. Now court documents filed against the bomber's accomplice show the terrorist asked AI to help build the bomb. A thread on what I think those documents do and don't show 🧵…
FBI says Palm Springs bombing suspects used AI chat program to help plan attack cnbc.com/2025/06/04/fbi…
I'd like to see formal statistical models of AI safety defense in depth. One might assume each layer fails independently of the others, so stacking enough layers drives the overall risk arbitrarily low. But often the layers are neither independent nor aimed at the same attack vectors, so the failure probabilities don't simply multiply.
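A toy illustration of the independence point (all numbers are hypothetical, not drawn from any real system): under independence the per-layer failure probabilities multiply, but if every layer shares a single failure mode, the overall risk collapses to roughly that of one layer.

```python
import random

# Hypothetical per-layer failure probabilities for three safety layers.
p_fail = [0.1, 0.1, 0.1]

# Under the independence assumption, overall risk is the product.
independent_risk = 1.0
for p in p_fail:
    independent_risk *= p
print(f"Assumed independent risk: {independent_risk:.4f}")  # 0.0010

# But if the layers share a failure mode (e.g. one jailbreak bypasses
# both the classifier and the system prompt), failures are correlated.
# Simulate a shared latent "attack strength": each layer still fails
# 10% of the time marginally, but strong attacks defeat all layers at once.
random.seed(0)
trials = 100_000
breaches = 0
for _ in range(trials):
    attack = random.random()  # shared across all layers in this trial
    if all(attack > 1 - p for p in p_fail):
        breaches += 1
print(f"Simulated correlated risk: {breaches / trials:.4f}")  # ~0.10
```

With a fully shared failure mode, three "10% failure" layers give roughly 10% overall risk, two orders of magnitude worse than the 0.1% the independence assumption promises.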
Can we massively scale up AI alignment research by identifying subproblems many people can work on in parallel? UK AISI’s alignment team is trying to do that. We’re starting with AI safety via debate - and we’ve just released our first paper🧵1/
Excited to have joined @GovAI_ as a Research Scholar! I'm now working with @jonasschuett, @NoemiDreksler, and Sophie Williams on the Risk Management Team 🤩 I had a fantastic time working with Jonas during the GovAI Winter Fellowship. Stay tuned for our first paper 👀

Really excited to release this new paper on AI benefit sharing! I think this topic -- ensuring that the economic and societal benefits of advanced AI are widely accessible internationally -- is going to be an increasingly important challenge as AI advancements continue.
Applications open for the 2025 Q1 Research Fellowship!
🗓 Feb 3 - April 4, 2025
➡️ 9-week program with expert mentors
📍 Co-work at London Initiative for Safe AI
➡️ £5000 stipend + meals, travel, housing & compute costs
🔗 Link in bio & thread
Application Deadline: Nov 21, 2024
I finished reading the AI Snake Oil book by @sayashk and @random_walker. Overall, there was a lot to like about it, but I think its treatment of the possibility and urgency of catastrophic AI risks reflected some poor scholarship. --- Longform post below. I'll…
As AI progresses, developers and their host governments will accrue enormous benefits. Could they strategically share these benefits to unlock crucial international agreements on AI development and deployment? I think so and make the case in a new AI governance essay (1/3)
Based on recent reporting, I now think our previous estimate of US AI chips smuggled to China was an underestimate. I think the rate right now is more like >100k/year. If AI capabilities keep growing, we'll look back on this as a massive own goal. Here’s how we can fix it…
You need 300mg of caffeine to get a Python script from Copilot? Wake up.