Sanchaita Hazra
@hsanchaita
Ph.D. in Economics @UUtah | AIxEcon, Behavioral Science, Experimental Econ | Prev at ISI Kolkata
What an incredible feeling getting the Outstanding Paper Award at #ICML2025! Cheers @mbodhisattwa @TuhinChakr 🚀🚀 Thanks so much @icmlconf
Very excited for a new #ICML2025 position paper accepted as oral w @mbodhisattwa & @TuhinChakr! 😎 What are the longitudinal harms of AI development? We use economic theories to highlight AI’s intertemporal impacts on livelihoods & its role in deepening labor-market inequality.
To predict how AI will impact jobs, co-director @davidautor explains, we need to understand how it changes the expertise required for an occupation. "It's very possible that AI will change who can do certain types of work."
I think turning everything "socially undesirable" into a "behavioral mistake that needs correcting", or into an inference that "people are stupid and don't know what they want or need", is misguided. I guess this is a rather boring economist opinion...
🏆 #ICML2025 Outstanding Paper Award: AI Safety Should Prioritize the Future of Work 📄 Paper: arxiv.org/abs/2504.13959 🎉 Congratulations to Sanchaita Hazra @hsanchaita, Bodhisattwa Prasad Majumder @mbodhisattwa, and Tuhin Chakrabarty @TuhinChakr for winning the Outstanding Award —…
Happy to see more traction on this research agenda, now getting bolstered with both anecdotal and theoretical evidence. Our ICML paper (arxiv.org/pdf/2504.13959) discusses how AI augmentation impacts the future of work.
AI can, and should, augment our thinking, not merely match and replace it.
Ending the day seeing NBER working papers written with research support from GPT-o3 and Claude. Don't know if I should feel ecstatic or baffled.
Excited to share what I have been focusing on this year! Inference-time search to optimize Bayesian surprise pushes us towards long-horizon discovery! Introducing "AutoDS": Autonomous Discovery via Surprisal. "It can not only find the diamond in the rough, but can also rule out…
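For readers unfamiliar with the surprisal objective mentioned above: Bayesian surprise is commonly defined as the KL divergence between an agent's posterior and prior beliefs after an observation. This is a minimal illustrative sketch of that quantity over a discrete hypothesis space, not code from AutoDS; the distributions and function name are made up for the example.

```python
import math

def bayesian_surprise(prior, posterior):
    """KL divergence D(posterior || prior) over a discrete hypothesis space.

    Measures how much an observation shifted belief: zero if beliefs
    are unchanged, large if the posterior concentrates far from the prior.
    """
    return sum(q * math.log(q / p) for q, p in zip(posterior, prior) if q > 0)

# Three candidate hypotheses, with prior belief spread across them.
prior = [0.5, 0.3, 0.2]
# An observation that concentrates belief on the least-favored hypothesis.
posterior = [0.1, 0.1, 0.8]

print(round(bayesian_surprise(prior, posterior), 3))  # → 0.838
```

An inference-time search in this spirit would score many candidate experiments or questions by the surprise their predicted outcomes induce and expand the most surprising ones first.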
Great science starts with great questions. 🤔✨ Meet AutoDS—an AI that doesn’t just hunt for answers, it decides which questions are worth asking. 🧵
Congratulations @dhruvagarwal17, @mbodhisattwa and team! Excited for the upcoming user study. 🙈
The mainstream view of AI for science says AI will rapidly accelerate science, and that we're on track to cure cancer, double the human lifespan, colonize space, and achieve a century of progress in the next decade. In a new AI Snake Oil essay, @random_walker and I argue that…
🚨New pre-print!🚨 “Understanding Trust in AI as an Information Source: Cross-Country Evidence”, joint w @m_serra_garcia! Coupling experimental data from 2,900 participants across 11 countries with WVS data, we provide novel evidence on individuals' trust in LLMs as information sources.
Big implications for "automated science": as LLM-based tech becomes incorporated into scientific workflows, questions and methods with more training data will attract more attention, while less developed or newer areas get neglected.
AI is very vulnerable to the McNamara Fallacy:
Step 1: [Train on] what can be easily measured.
Step 2: Disregard that which cannot be measured easily.
Step 3: Presume that which cannot be measured easily isn't important.
Step 4: Say that which can't be easily measured doesn't exist.
This is super cool! Been thinking about this for a while. The real x-risk is all of us turning into Wall-E people.
🏆 Join us for our oral presentation today in West Ballroom A at 3:30 PM, and the poster session at 4:30 PM in East Exhibition Hall A-B, poster #E-500! Sad that @mbodhisattwa and I could not travel to @icmlconf; thanks @TuhinChakr for representing! Feel free to send comments our way.
Cheers to our work getting the Outstanding Paper Award at #ICML2025! Check out our oral presentation tomorrow at the position paper track! @hsanchaita @TuhinChakr @allen_ai ✨✨
Honored to get the Outstanding Position Paper Award at @icmlconf :) Come attend my talk and poster tomorrow on human-centered considerations for a safer and better future of work. I will be recruiting PhD students at @stonybrooku @sbucompsc this coming fall. Please get in touch.
xAI launched Grok 4 without any documentation of their safety testing. This is reckless and breaks with industry best practices followed by other major AI labs. If xAI is going to be a frontier AI developer, they should act like one. 🧵
Sadly, both @hsanchaita and I will be missing @icmlconf (due to visa reasons) for our Oral presentation, but catch @TuhinChakr presenting our position paper. If you have questions, thoughts, or follow-ups, please don't hesitate to send them our way! 📧 paper & review:…
I've always felt uncomfortable with framing AI risk around the actions of "malicious actors," because sometimes the malicious actor is the company that built the thing, and a model causes harm because it was successfully steered into doing exactly what its creators wanted it to do.
Really valuable and interesting work!