Bodhisattwa Majumder
@mbodhisattwa
Research @allen_ai. AI x Science, Agents. Leading Data-driven Discovery. PhD @ucsd_cse, @AdobeResearch Fellow. Prev @googleai @metaai.
Excited to share what I have been focusing on this year! Inference-time search to optimize Bayesian surprise pushes us towards long-horizon discovery! Introducing "AutoDS": Autonomous Discovery via Surprisal. "It can not only find the diamond in the rough, but can also rule out…
Great science starts with great questions. 🤔✨ Meet AutoDS—an AI that doesn’t just hunt for answers, it decides which questions are worth asking. 🧵
AutoDS? x.com/mbodhisattwa/s…
4. Question Selection > Problem Solving "It's harder to come up with a really good conjecture than it is to solve it." Takeaway: Focus on asking the right questions, not just finding answers. The person who identifies the breakthrough question often matters more than who…
Meta's AI Research Agents for Machine Learning (on MLE-bench; arxiv.org/pdf/2507.02554) resonates with AutoDS (arxiv.org/pdf/2507.00310), which uses MCTS to explore the best next ideas/hypotheses.
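The "MCTS to explore the best next ideas/hypotheses" idea can be sketched as a bandit-style loop: each hypothesis is credited with the Bayesian surprise its experiment produces, here approximated as the KL divergence between a Bernoulli prior and posterior belief. This is an illustrative assumption, not the paper's actual algorithm; the `Node`, `search`, and `run_experiment` names and the UCB1 selection rule are all hypothetical.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q):
    a simple proxy for Bayesian surprise (prior q -> posterior p)."""
    eps = 1e-9
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

class Node:
    """One candidate hypothesis in the search tree."""
    def __init__(self, hypothesis, prior):
        self.hypothesis = hypothesis
        self.prior = prior          # belief before running an experiment
        self.visits = 0
        self.total_surprise = 0.0

    def ucb(self, total_visits, c=1.4):
        """UCB1 score: mean surprise so far plus an exploration bonus."""
        if self.visits == 0:
            return float("inf")     # always try untested hypotheses once
        mean = self.total_surprise / self.visits
        return mean + c * math.sqrt(math.log(total_visits) / self.visits)

def search(nodes, run_experiment, budget=100):
    """Repeatedly pick the hypothesis with the best UCB score and credit it
    with the surprise (prior-to-posterior KL) its experiment produced."""
    for t in range(1, budget + 1):
        node = max(nodes, key=lambda n: n.ucb(t))
        posterior = run_experiment(node.hypothesis, node.prior)
        node.visits += 1
        node.total_surprise += kl_bernoulli(posterior, node.prior)
    return max(nodes, key=lambda n: n.total_surprise / max(n.visits, 1))
```

For example, if a mock `run_experiment` shifts belief in one hypothesis from 0.5 to 0.95 while leaving the others at their priors, the loop concentrates its budget on the surprising one, matching the intuition of hunting for "the diamond in the rough".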
🏆 #ICML2025 Best Paper Award: AI Safety Should Prioritize the Future of Work 📄 Paper: arxiv.org/abs/2504.13959 🎉 Congratulations to Sanchaita Hazra @hsanchaita, Bodhisattwa Prasad Majumder @mbodhisattwa, and Tuhin Chakrabarty @TuhinChakr for winning the Outstanding Award —…
Happy to see more traction on this research agenda, now getting bolstered with both anecdotal and theoretical evidence. Our ICML paper (arxiv.org/pdf/2504.13959) discusses how AI augmentation impacts the future of work.
AI can, and should, augment our thinking, not merely match and replace it.
Ending the day seeing that NBER working papers are being written with research support from GPT-o3 and Claude. I don't know if I should feel ecstatic or baffled.
Pretty bullish on this type of system
"writing is not only about reporting results; it also provides a tool to uncover new thoughts and ideas. Writing compels us to think"
In case you are wondering how we research with AutoDS: behind the scenes, @dhruvagarwal17 is chilling, since AutoDS is doing the research for him. 👽
AutoDS shows how AI can turbo‑charge discovery. 🚀 📚 Read more in the blog: allenai.org/blog/autods 📝 Check out the paper: arxiv.org/pdf/2507.00310 💻 Try AutoDS for yourself: github.com/allenai/autods
Congratulations @dhruvagarwal17, @mbodhisattwa and team! Excited for the upcoming user study. 🙈
Does trust in LLM reflect people's generalized trust in others, trust in info sources, or is it a distinct construct? We compare measures of trust in the World Value Survey to trust in LLM. Trust in LLM differs from generalized trust. Trust in LLM is strongly correlated to trust…
We need algorithms/agents that encourage open-endedness, focus on the long tail, and honor the research mission irrespective of its (un)commonness. Concurrently, we must remain vigilant about AI making quasi-discoveries; if not us, who then?
Big implications for "automated science": as LLM-based tech becomes incorporated into scientific workflows, questions and methods with more training data will get more attention, while less developed or new areas get neglected.
AI is very vulnerable to The McNamara Fallacy: Step 1: [Train on] what can be easily measured Step 2: Disregard that which cannot be measured easily Step 3: Presume that which cannot be measured easily isn’t important Step 4: Say that which can’t be easily measured doesn’t exist
This is super cool! Been thinking about this for a while. The real x-risk is all of us turning into Wall-E people.
Very excited for a new #ICML2025 position paper accepted as oral w @mbodhisattwa & @TuhinChakr! 😎 What are the longitudinal harms of AI development? We use economic theories to highlight AI’s intertemporal impacts on livelihoods & its role in deepening labor-market inequality.
🏆 Join us for our oral presentation today in West Ballroom A at 3:30 PM, and the poster session at 4:30 PM in East Exhb. Hall A-B # E-500! Sad that @mbodhisattwa and I could not travel to @icmlconf, thanks @TuhinChakr for representing! Feel free to send comments our way.