Badr AlKhamissi
@bkhmsi
PhD @EPFL_en | Ex @MetaAI, @SonyAI_global, @Microsoft | MSc @CoCoNeuro_Gold | BSc CS @AUC | Egyptian 🇪🇬
“Israel has killed a classroom full of children every single day,” UNRWA's Sam Rose tells BBC Radio 4. Children in #Gaza have been killed while sleeping, sheltering in schools, or queuing for water. Graphic by @TRTWorld
I'm really honored by this Quanta article on how we use NeuroAI models to understand the brain, as well as treat it for improved brain health. Thank you @ejbeyer!
Martin Schrimpf (@martin_schrimpf) trained an AI model to generate sentences which can activate or suppress neural activity in the reader’s brain. This can potentially help researchers treat depression, dyslexia and other brain-related conditions. quantamagazine.org/how-ai-models-…
Had a great time talking at the @LauzHack Deep Learning Bootcamp about the Mixture of Cognitive Reasoners paper! You can watch the full presentation here: youtu.be/jsNYzUKZtNE
Excited to be talking today at 5:00 PM CEST on Ploutos about our new paper: Mixture of Cognitive Reasoners, focused on building more brain-inspired AI models. Project Page: bkhmsi.github.io/mixture-of-cog… Would love to see you there!
Featuring: @bkhmsi of @EPFL in a deep dive with @ceciletamura of @ploutosai. July 16 at 5:00 PM CEST. app.ploutos.dev/streams/logica…
Just updated the Egyptians in AI Research website, now featuring 188 incredible researchers! Great to see so many familiar and new faces added to the list. Let’s aim for 200! If you have suggestions for improving the site, I’d love to hear them! Website:…

🗒️Can we meta-learn test-time learning to solve long-context reasoning? Our latest work, PERK, learns to encode long contexts through gradient updates to a memory scratchpad at test time, achieving long-context reasoning robust to complexity and length extrapolation while…
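For anyone wondering what "gradient updates to a memory scratchpad at test time" could look like in practice, here is a minimal, hypothetical PyTorch sketch. The toy model, function names, and objective are my own illustration, not the PERK implementation:

```python
# Hedged sketch (toy model, not the authors' code) of test-time learning on a
# memory scratchpad: a small learnable tensor is adapted with a few gradient
# steps on the long context, then used by the frozen model to answer a query.
import torch
import torch.nn as nn

class TinyReader(nn.Module):
    """Toy stand-in for a frozen LM that conditions on a memory scratchpad."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens, memory):
        # Prepend memory slots to the token embeddings and mean-pool.
        h = torch.cat([memory, self.embed(tokens)], dim=0).mean(dim=0)
        return self.head(h)

def encode_context(model, context_tokens, steps=5, lr=1e-2, mem_slots=4, dim=32):
    """Adapt only the memory scratchpad to the context at test time."""
    memory = torch.zeros(mem_slots, dim, requires_grad=True)
    opt = torch.optim.SGD([memory], lr=lr)
    for _ in range(steps):
        # Toy self-supervised objective: make the pooled memory+context
        # representation predictive of the context tokens.
        logits = model(context_tokens, memory)
        loss = nn.functional.cross_entropy(
            logits.unsqueeze(0).expand(len(context_tokens), -1), context_tokens)
        opt.zero_grad(); loss.backward(); opt.step()
    return memory.detach()

model = TinyReader()
context = torch.randint(0, 100, (512,))   # stand-in for a long context
memory = encode_context(model, context)   # gradient-based "reading"
query = torch.randint(0, 100, (8,))
answer_logits = model(query, memory)      # answer with frozen weights + adapted memory
```

The point of the sketch is only the shape of the idea: the base model stays frozen, and a small set of parameters is optimized per input at inference time.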
Can LLMs learn to reason more like humans, deciding when to think hard and when to keep it simple? Inspired by metareasoning in cognitive science, the process of allocating mental effort based on cost-benefit tradeoffs, we introduce a novel Value of Computation reward to train…
🚀 New paper update! We’ve just released an updated version of “Rational Metareasoning for Large Language Models.” 🧠 Small tweaks to the training algorithm reduced reasoning tokens by 23–45%, while maintaining or improving task performance across diverse datasets.
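Roughly, a "Value of Computation"-style reward trades the benefit of an answer against the cost of the reasoning spent to reach it. A toy, illustrative sketch (the function name and cost constant are assumptions, not taken from the paper):

```python
# Hedged sketch: reward = task benefit minus the cost of the reasoning tokens
# spent, so extra "thinking" only pays off when it buys enough extra accuracy.
def voc_reward(task_reward: float, num_reasoning_tokens: int,
               cost_per_token: float = 1e-3) -> float:
    """Value-of-computation-style reward: benefit minus computation cost."""
    return task_reward - cost_per_token * num_reasoning_tokens

# Example: a correct answer reached with a short chain of thought scores
# higher than the same correct answer reached with a long one.
short = voc_reward(task_reward=1.0, num_reasoning_tokens=50)    # 0.95
long_ = voc_reward(task_reward=1.0, num_reasoning_tokens=800)   # 0.20
print(short, long_)
```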
For the love of god, can someone stop Israel from KILLING DOZENS OF PALESTINIANS EVERY SINGLE DAY. Will Western media even cover this??
NEW PAPER ALERT: Recent studies have shown that LLMs often lack robustness to distribution shifts in their reasoning. Our paper proposes a new method, AbstRaL, to improve LLMs’ reasoning robustness by promoting abstract thinking with granular reinforcement learning.
Check out Badr's work on specializing experts in MoE-style models to individually represent the operation of different brain networks.
🚨New Preprint!! Thrilled to share with you our latest work: “Mixture of Cognitive Reasoners”, a modular transformer architecture inspired by the brain’s functional networks: language, logic, social reasoning, and world knowledge. 1/ 🧵👇
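For the curious, here is a minimal, hypothetical sketch of what a brain-network-style expert layer could look like. Expert names, shapes, and the hard top-1 routing are illustrative assumptions, not the released Mixture of Cognitive Reasoners implementation:

```python
# Hedged sketch: a router sends each token to one of four expert MLPs, each
# intended to specialize like a brain functional network, and the routing
# decisions stay inspectable.
import torch
import torch.nn as nn

class CognitiveExpertLayer(nn.Module):
    EXPERTS = ["language", "logic", "social", "world_knowledge"]

    def __init__(self, dim=64, hidden=256):
        super().__init__()
        self.router = nn.Linear(dim, len(self.EXPERTS))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in self.EXPERTS
        )

    def forward(self, x):                      # x: (tokens, dim)
        weights = self.router(x).softmax(-1)   # (tokens, n_experts)
        top = weights.argmax(-1)               # hard top-1 routing per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top == i
            if mask.any():
                out[mask] = expert(x[mask])
        return out, top

layer = CognitiveExpertLayer()
tokens = torch.randn(10, 64)
y, routes = layer(tokens)
print([CognitiveExpertLayer.EXPERTS[i] for i in routes.tolist()])
```

The design choice the sketch highlights is interpretability: because each expert carries a named cognitive role, the per-token routing trace doubles as a readout of which "network" the model engaged.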