Elias Bareinboim
@eliasbareinboim
Professor of Causal Inference, Machine Learning, and Artificial Intelligence. Director, CausalAI Lab @ Columbia University.
If you are interested in how to improve decision-making, and more fundamentally, how causal inference & RL are related, check out our "Intro to Causal RL" paper: causalai.net/r65.pdf (with @sanghack & @JunzheZhang12). This follows our ICML tutorial: crl.causalai.net…
Somehow, the discussion on generalization got misconstrued as a contention between RCTs and observational studies (OS). It isn't. Generalization is concerned with transporting both experimental and observational findings across heterogeneous populations, given whatever data are available. The three toy…
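For concreteness, the simplest instance of the theory: when the source and target populations differ only in how a covariate Z is distributed (an S-node pointing to Z in the selection diagram), the do-calculus derives the transport formula below, which fuses the source's experimental finding with the target's observational data. A minimal sketch of the canonical case; the general conditions are in the transportability papers.

```latex
% Transport formula for the canonical selection diagram (S -> Z only):
P^{*}(y \mid do(x)) \;=\; \sum_{z} P(y \mid do(x), z)\, P^{*}(z)
% P(y | do(x), z): z-specific experimental finding from the source (e.g., an RCT)
% P^{*}(z):        observational distribution of Z measured in the target
```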
Generalizability of RCT findings is problematic. However, it can be partially remedied when OS findings are consulted. Are you familiar with the results reported here: ucla.in/2Jc1kdD? Or here: ucla.in/2N7S0K9?
One question I’ve received a few times, and would like to clarify about this work (causalai.net/r115.pdf), is: why do we need identification and the ctf-calculus? Isn’t the do-calculus enough? The answer to the first question is that identification is essential: estimating a…
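To unpack "identification is essential" with the most familiar example (the textbook backdoor case, not specific to r115): identification means rewriting the target query as a functional of the distributions we can actually measure; only then does estimation even make sense.

```latex
% Backdoor adjustment: if Z satisfies the backdoor criterion w.r.t. (X, Y),
P(y \mid do(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z)
% The l.h.s. is a layer-2 (interventional) query; the r.h.s. is a functional
% of the layer-1 (observational) distribution, hence estimable from data.
```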
Hi all, if you're attending ICML (Vancouver) or UAI (Rio de Janeiro), I'm happy to share some news from the lab! Please check it out -- and feel free to drop by or shoot me a line if any of it sounds intriguing. 1/5 "Counterfactual Graphical Models: Constraints and Inference"…
We are delighted to have you, @eliasbareinboim! Looking forward to it!
5/5 Last but definitely not least, I’m honored to be giving a keynote on Wednesday (7/23) titled "Towards Causal Artificial Intelligence." For details, see: auai.org/uai2025/keynot… Here’s a short abstract: While many AI scientists and engineers believe we are on the verge of…
🚀 Excited to announce our workshop “Embodied World Models for Decision Making” at #NeurIPS2025! 🎉 Keynote speakers, panelists, and content are now live! Check out: 👉 embodied-world-models.github.io #WorldModels #RL #NeurIPS #NeurIPS2025 #neuripsworkshop #workshop
next is Elias Bareinboim (@eliasbareinboim) from Columbia University, who will discuss, in his keynote talk, the recent ✨ progress toward building causally intelligent AI systems ✨ full abstract 👉 auai.org/uai2025/keynot…
time to announce our amazing keynote speakers! we start with Francesca Dominici (@francescadomin8) from Harvard University, who will talk about: ✨ AI's uncertain, double-edged role in the fight against climate change ✨ full abstract 👉 auai.org/uai2025/keynot…
Orthogonal to @f2harrell’s initial note: CBN is a layer 2 model that lets us answer interventional (layer 2) queries using layer 2 calculus (do-calculus) -- see the 2nd green row in the attached table. One recent result: we can now more precisely match the query, graph, and…
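For readers following along, the layer-2 calculus in question consists of Pearl's three inference rules; a standard statement (for disjoint node sets X, Y, Z, W), not the attached table itself:

```latex
% Rule 1 (insertion/deletion of observations):
P(y \mid do(x), z, w) = P(y \mid do(x), w)
  \quad\text{if } (Y \perp Z \mid X, W) \text{ in } G_{\overline{X}}
% Rule 2 (action/observation exchange):
P(y \mid do(x), do(z), w) = P(y \mid do(x), z, w)
  \quad\text{if } (Y \perp Z \mid X, W) \text{ in } G_{\overline{X}\underline{Z}}
% Rule 3 (insertion/deletion of actions):
P(y \mid do(x), do(z), w) = P(y \mid do(x), w)
  \quad\text{if } (Y \perp Z \mid X, W) \text{ in } G_{\overline{X}\,\overline{Z(W)}}
% where Z(W) is the set of Z-nodes that are not ancestors of any W-node
% in G_{\overline{X}}.
```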
Going from "causal structure" to explanation is not trivial, because the latter is level-3 while the former is barely level-2 (given exp. data). But I agree that the phrase "prediction is explainable" is problematic.
Could causal reasoning be the next step toward building more robust, generalizable, and interpretable RL agents? To find out, you may wish to participate in the Causal Reinforcement Learning (CausalRL) Workshop, which will be held on August 5th, 2025, as part of the…
That's a good point! I wonder if the RL community (eg @RichardSSutton) is aware of the Ladder of Causation (described here causalai.net/r60.pdf), and whether it sees the interplay between causal knowledge and decision-making. The application is obvious: to move from…
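For anyone meeting the Ladder for the first time, the canonical query at each layer (following causalai.net/r60.pdf):

```latex
% Layer 1 (associational):   P(y \mid x)           -- seeing
% Layer 2 (interventional):  P(y \mid do(x))       -- doing
% Layer 3 (counterfactual):  P(y_{x} \mid x', y')  -- imagining
```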
Hi @NandoDF, one surprising result from CI in the last decade is that counterfactuals (level 3 of Pearl's Hierarchy) can be used for decision-making and can lead to dominant strategies over essentially any available RL strategy (level 2), as discussed in Sec 7 (p. 114) in…
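To see the flavor of this result, here is a toy simulation in the spirit of the "Greedy Casino" example (Bareinboim, Forney & Pearl, NeurIPS 2015); the payout numbers follow that example, but the epsilon-greedy agent below is a simplified sketch, not the paper's algorithm. The level-2 agent estimates E[Y | do(arm)] and is stuck around 0.30; the level-3 agent conditions on its own intent, effectively estimating the counterfactual E[Y_arm | intent], and finds the 0.45 arm in each intent state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Greedy Casino": unobserved D (drunk) and B (blinking machine) set
# both the machines' win rates and the player's natural arm choice.
PAYOUT = {  # (d, b) -> (win prob of arm 0, win prob of arm 1)
    (0, 0): (0.1, 0.5),
    (0, 1): (0.5, 0.1),
    (1, 0): (0.4, 0.2),
    (1, 1): (0.2, 0.4),
}

def run(use_intent, n=50_000, eps=0.05):
    wins = np.zeros((2, 2))   # success counts per (context, arm)
    pulls = np.ones((2, 2))   # pull counts (start at 1 to avoid 0/0)
    total = 0.0
    for _ in range(n):
        d, b = int(rng.integers(2)), int(rng.integers(2))
        intent = d ^ b                     # arm the player would pull naturally
        ctx = intent if use_intent else 0  # level-3 agent conditions on intent
        arm = int(np.argmax(wins[ctx] / pulls[ctx]))
        if rng.random() < eps:             # small exploration rate
            arm = int(rng.integers(2))
        y = float(rng.random() < PAYOUT[(d, b)][arm])
        wins[ctx, arm] += y
        pulls[ctx, arm] += 1
        total += y
    return total / n

print("level-2 (intent-blind):", run(False))  # -> about 0.30
print("level-3 (intent-aware):", run(True))   # -> about 0.44
```

Note the trick: given the intent, the agent's actual pull is as-if randomized with respect to (D, B), so the per-intent estimates converge to the counterfactual payoffs even though D and B are never observed.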
Hi Hugo! Someone else also pointed out we need benchmarks. As with intelligence, many factors get lumped into consciousness. We need benchmarks for each of these factors, eg attention schemas, self-awareness, social awareness, ethics & morality (empathy, compassion), internal…
I understand that CI, from the 1970s until around 2010, was mostly focused on the challenge of moving from observational (OBS) to experimental (EXP) worlds, and on controlling for confounding in this sense. However, it's an oversimplification to think of CI as solely about observational studies, as the…
Thanks for defending the honor of Causal Inference (CI) and reminding the zealots that: "CI is great bc the assumptions are right there and the reader can judge." We should also remind the uninitiated that the danger of confounding in OS varies from study to study, depending on…
Excited to advertise a postdoc to work with me and an excellent team at BR-UK applying causal modelling to behavioural research - please contact me for more info 😻
We're hiring! Join our team at University College London as a Research Fellow for BR-UK. This post will support a work package on Methods & Evidence Synthesis, working with @david_lagnado and our teams in Sheffield & Edinburgh. Apply by 30th March here: ucl.ac.uk/work-at-ucl/se…
How to speed up the process: that is the question. Should we let statisticians move naturally to modern CI, or jolt them to hurry, at the risk of making them more defensive and stubborn?
Indeed, nice reference, Boris! My interpretation of T. Kuhn's account is that operating based on consensus (or popularity) is a recipe for most historical disasters or tensions, to put it nicely. One interesting thing from the book, which I am re-reading, is that after the…
In a panel last Thursday, I suggested that, in order to assess scale-independent limitations of LLMs, we feed them toy examples that require knowledge of data-fusion theory and see if/when they fail. @dwarkesh_sp's question jolted me to realize that we do not need to resort…
Great question!!!
The true generative model is Nature -- a collection of causal mechanisms. Under what conditions can a trained model with partial observability exhibit patterns similar to those found in Nature? We explored this question with Bengio, Xia, and Lee in a NeurIPS-21 paper:…
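A two-line version of the underlying difficulty (a generic construction, not the neural model from the paper): two mechanisms can generate exactly the same observational behavior while disagreeing completely about interventions, so matching Nature at layer 1 does not imply matching it at layer 2.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
U = rng.integers(2, size=N)   # Nature's hidden state (exogenous noise)

# Toy construction:
# SCM A: U -> X, X -> Y  (Y listens to X)
# SCM B: U -> X, U -> Y  (Y ignores X; pure confounding)
def sample(model, do_x=None):
    X = U.copy() if do_x is None else np.full(N, do_x)
    Y = X if model == "A" else U
    return X, Y

for m in ("A", "B"):
    X, Y = sample(m)               # observational regime
    print(m, "P(Y=1 | X=1)     =", Y[X == 1].mean())
    _, Y = sample(m, do_x=1)       # interventional regime
    print(m, "P(Y=1 | do(X=1)) =", Y.mean())
# Both models give P(Y=1 | X=1) = 1.0, yet under do(X=1)
# model A yields 1.0 while model B yields ~0.5.
```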
💯 "synthetic data" only makes sense if the data generating model is a better model of reality than the model being trained. This only happens in very special cases (eg when first-principles simulators are available).