Yoshua Bengio
@Yoshua_Bengio
Working towards the safe development of AI for the benefit of all @UMontreal, @LawZero_ & @Mila_Quebec A.M. Turing Award Recipient and most-cited AI researcher.
Today marks a big milestone for me. I'm launching @LawZero_, a nonprofit focusing on a new safe-by-design approach to AI that could both accelerate scientific discovery and provide a safeguard against the dangers of agentic AI.
Every frontier AI system should be grounded in a core commitment: to protect human joy and endeavour. Today, we launch @LawZero_, a nonprofit dedicated to advancing safe-by-design AI. lawzero.org
Technology = power. AI is reshaping power — fast. Today’s AI doesn’t just assist decisions; it makes them. Governments use it for surveillance, prediction, and control — often with no oversight. Our new paper proposes some ML safeguards to resist AI-enabled authoritarianism:…
The scientific community remains divided on the catastrophic risks of AI, whether regarding their timeline, their severity, or even their existence. These disagreements above all reveal deep uncertainty. Empirical data nonetheless shows an increase…
Should we worry about a rogue AI? rc.ca/TM1wPQ
Major progress in AIxBio greatly increases the risk of deliberate or accidental release of harmful bioagents. This demands urgent attention, serious caution & decisive action. Read the statement I've signed with many other AI & life science researchers: nti.org/news/internati…
Rapid advances at the intersection of AI and the life sciences have the potential to transform public health and medicine: faster development of vaccines and treatments, earlier outbreak detection, and tools to improve health worldwide. 🧵
Delighted to be working with you again and embarking on this new adventure, Philippe. Welcome to the team!
We are thrilled to welcome Philippe Beaudoin to LawZero as Senior Director, Research. A seasoned researcher and entrepreneur, he brings experience that will be invaluable as we advance our mission to build safe-by-design AI systems. Full press release: lawzero.org/en/news/lawzer…
Delighted to be working with you again and to begin this new adventure together, Philippe. Welcome to the team!
We are delighted to welcome Philippe Beaudoin to LoiZéro as Senior Director, Research. An experienced researcher and entrepreneur, he will contribute greatly to our mission of developing safe AI systems. Read the announcement: lawzero.org/fr/nouvelles/l…
A simple AGI safety technique: AI's thoughts are in plain English, so just read them. We know it works, with OK (not perfect) transparency! The risk is fragility: RL training, new architectures, etc. threaten transparency. Experts from many orgs agree we should try to preserve it:…
The future of AI governance may hinge on our ability to develop trusted and effective ways to make credible claims about AI systems. This new report expands our understanding of the verification challenge and maps out compelling areas for further work. ⬇️
Governing AI requires international agreements, but cooperation can be risky if there’s no basis for trust. Our new report looks at how to verify compliance with AI agreements without sacrificing national security. This is neither impossible nor trivial.🧵 1/
By advancing SB 53, California is uniquely positioned to continue supporting cutting-edge AI while proactively taking a step towards addressing the severe and potentially irreversible harms that frontier systems could cause.
I’m expanding my AI bill into a broader effort to boost transparency & advance an industrial policy for AI in CA. We need transparency & accountability to boost trust in AI & mitigate material risks. We also need to accelerate & democratize AI development. SB 53 does both.
Making God has secured interviews with 'Godfathers of AI', @GeoffreyHinton & @Yoshua_Bengio. Read more here: manifund.org/projects/creat…
Excited to share our paper: "Chain-of-Thought Is Not Explainability"! We unpack a critical misconception in AI: models explaining their Chain-of-Thought (CoT) steps aren't necessarily revealing their true reasoning. Spoiler: transparency of CoT can be an illusion. (1/9) 🧵
📢 £18m grant opportunity in Safeguarded AI: we're looking to catalyse the creation of a new UK-based non-profit to lead groundbreaking machine learning research for provably safe AI. Learn more and apply by 1 October 2025: link.aria.org.uk/ta2-phase2-x
Thank you for your visit and interest in our work MEP @McNamaraMEP!
Thanks to @Mila_Quebec for the opportunity to visit and meet @Yoshua_Bengio again and hear his views on the current state of A.I. and his proposed response @LawZero_ yoshuabengio.org/2025/06/03/int…
The timeline and severity of major AI risks are still debated within the scientific community, but these disagreements reveal significant uncertainty. The fact that many credible experts, including independent ones, consider some catastrophic scenarios plausible should be enough…
As frontier AI systems increase in capability and agency, the risk of AI-driven cyberattacks will likely rise sharply. Tasks once done by elite hackers may soon be carried out autonomously, and this demands urgent attention.
1/ 🔥 AI agents are reaching a breakthrough moment in cybersecurity. In our latest work: 🔓 CyberGym: AI agents discovered 15 zero-days in major open-source projects 💰 BountyBench: AI agents solved real-world bug bounty tasks worth tens of thousands of dollars 🤖…
Enjoyed speaking with @SigalSamuel of @voxdotcom to mark the launch of @LawZero_. We discussed the motivation behind the project, its research direction, and the challenges and risks of increasingly capable and autonomous AI systems. Full article: vox.com/future-perfect…
The @ScienceBoard_UN Brief on the Verification of Frontier AI Models I led is now available! Trusted, visible, and confidential ways to verify claims about frontier AI could play a key role in mitigating geopolitical escalation and sharing AI’s benefits. Full document below ⬇️
📈 As AI advances rapidly, where does the science stand on AI verification? @YoshuaBengio leads a new @ScienceBoard_UN Brief on verifying frontier AI models, spotlighting tools to assess claims and boost global safety. 📘 Read more: bit.ly/4kMBBet
World-renowned Canadian leaders in #AI, Prof. @Yoshua_Bengio, and #QuantumComputing, Dr. Martin Laforest, shared their perspectives on these rapidly evolving technologies with the G7 DPA Roundtable. #G7viePrivée