Umang Bhatt
@umangsbhatt
Assistant Professor @Cambridge_Uni. Fellow @Kings_College.
📣 Life update: I'm joining the University of Cambridge this fall as an Assistant Professor and as a Fellow in Computer Science at @Kings_College. Excited to (re)join the @Cambridge_Uni community!
Had a great time in Dakar for my second @DeepIndaba! It was fun to lead a practical session on responsible AI and meet so many wonderful people in Senegal 🇸🇳 🔥 #Indaba2024 #DLI2024
📢 New Preprint: What happens at a population level when humans learn from AI systems that themselves learned from us? We extended the classic "Rogers' Paradox" to explore human-AI learning networks—and found some surprising results! 🤖🧠 (Paper link in 🧵)
I’m excited to share new work from Datadog AI Research! We just released Toto, a new SOTA (by a wide margin!) time series foundation model, and BOOM, the largest benchmark of observability metrics. Both are available under the Apache 2.0 license. 🧵
CDS Faculty Fellow Umang Bhatt (@umangsbhatt), CDS visiting researcher Valerie Chen (@valeriechen_), & colleagues developed MODISTE, a system that personalizes when AI should assist you—learning your preferences from your behavior. Presented at AAAI 2025. nyudatascience.medium.com/ai-isnt-always…
I’m in Philadelphia this weekend at #AAAI2025 🔔 If you’re around, come by and chat with @valeriechen_ and me today (and tomorrow) about personalizing access to AI assistance! arxiv.org/abs/2304.06701
Catch me in a few days at #AAAI2025 🏛️🔔! I'll be presenting our work on “Learning Personalized Decision Support Policies” at: 📅Poster (Session 3) on March 1 @ 12:30p 🎙️Oral (Humans and AI 5) on March 2 @ 2:54p DM to chat about human-AI collab, coding agents, and better evals!
CDS Faculty Fellow Umang Bhatt (@umangsbhatt) explores when AI should step back for cultural fit. His "algorithmic resignation" concept highlights where human judgment should prevail. "To align AI, we must grasp cultural context of its use," he says. nyudatascience.medium.com/when-should-ai…
What would it take to build machines that partner with humans? Can we design AI assistants to be thought partners? In a new perspective, we describe how computational cognitive science can help build AI systems that learn and think *with* people! 🧠🫱🏿🫲🏾🤖 arxiv.org/abs/2408.03943
[New preprint!] What does it take to build machines that **meet our expectations** and **complement our limitations**? In this Perspective, we chart out a vision, which engages deeply with computational cognitive science, to design truly human-centric AI “thought partners” 1/