Soroush H. Zargarbashi
@zargar_soroush
Trustworthy AI, Uncertainty Quantification, Research Intern @Apple, PhD candidate @cispa
🚨 I’m more than happy to share our new work! A critical question for any second-order uncertainty quantification method is: “even if valid, what do we do with it?” This work is our answer: we offer a coverage guarantee per input and return prediction sets that are optimally efficient.
1/5 Ever wondered how to apply conformal prediction when there's epistemic uncertainty? Our new paper addresses this question! CP can benefit from models like Bayesian, evidential, and credal predictors to produce better prediction sets, for instance in terms of conditional coverage.
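For readers new to CP: the thread builds on standard split conformal prediction. Below is a minimal sketch of that baseline only (not the paper's method); the function name and the `1 - softmax probability` score are illustrative choices, and the epistemic-uncertainty extensions are in the paper itself.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Baseline split conformal prediction (illustrative sketch).

    Score = 1 - predicted probability of the true class.
    Returns, for each test point, the set of classes whose score
    falls below a calibrated threshold; this set contains the true
    label with probability >= 1 - alpha (marginal coverage).
    """
    n = len(cal_labels)
    # Nonconformity scores on the held-out calibration split
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level, clipped to 1
    level = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # Include every class whose score is within the threshold
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

With a well-calibrated model, confident test points get small sets and uncertain ones get larger sets; the paper's question is how to shape these sets optimally when the predictor itself reports epistemic uncertainty.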
Plus: I am responding to a reviewer who admits they missed an entire section of our manuscript! Beyond tone, there should be a guideline saying: please read the paper *at least once* before writing the review!
Most #NeurIPS reviews are either generated or, worse, extremely hostile, baseless, and destructive. In security conferences, one of the kind guidelines I appreciated was: write reviews as if they were addressed to you, or even to your most junior PhD student. We need this spirit!
Anyone from #iran looking for a phd/postdoc/research internship in statistical learning theory, deep learning theory etc, contact me. Please retweet.
One day, may peace prevail everywhere 🌍✨ Humanity holds incredible potential when we work together, from curing disease to ending hunger and lifting one another up. Together, we can build a better world for all. #Peace
🧵 1/8 The Illusion of Thinking: Are reasoning models like o1/o3, DeepSeek-R1, and Claude 3.7 Sonnet really "thinking"? 🤔 Or are they just throwing more compute towards pattern matching? The new Large Reasoning Models (LRMs) show promising gains on math and coding benchmarks,…
The video presentations for #ICLR2025 are now publicly available, including our “Robust Conformal Prediction with a Single Binary Certificate” paper together with the poster. Feel free to visit and let us know your comments/questions. iclr.cc/virtual/2025/p…
Optimal Conformal Prediction under Epistemic Uncertainty. arxiv.org/abs/2505.19033
If you are at the #ICLR2025 conference, you can find our poster (number 429) on Sat 26 Apr, 10:00 a.m. to 12:30 p.m. (+08). #conformal_prediction #robust_ml #ICLR #ICLR25
🚨 Robust conformal prediction is expensive: we need around 10,000 forward passes per input. Or is it? Check out our ICLR2025 paper: openreview.net/forum?id=ltrxR… We extend conformal sets to worst-case noise under any smoothing, with far fewer samples. Joint work with @abojchevski
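This is not the paper's certificate (see the link for that), but a toy numpy illustration of why reducing the problem to a single binary event is sample-cheap: a one-sided Hoeffding bound on one Bernoulli probability tightens at rate O(sqrt(log(1/δ)/n)), regardless of the score distribution. All names here are hypothetical.

```python
import math
import numpy as np

def hoeffding_lower_bound(indicators, delta=1e-3):
    """Lower confidence bound on P(event) from n Bernoulli samples.

    `indicators` holds 0/1 outcomes of one binary event (e.g. "the
    smoothed score stays under the calibrated threshold") across
    noisy forward passes. With probability >= 1 - delta, the true
    probability is at least the returned value; the slack shrinks
    as O(sqrt(log(1/delta) / n)), so few samples already certify it.
    """
    n = len(indicators)
    p_hat = float(np.mean(indicators))
    return max(0.0, p_hat - math.sqrt(math.log(1.0 / delta) / (2.0 * n)))
```

The design point: bounding one binary probability avoids estimating a full quantile of the smoothed score distribution, which is where naive Monte Carlo robust CP burns its thousands of forward passes.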
Excited to share that I’ll be starting a research internship at @Apple beginning next month!
1/ Can Large Language Models (LLMs) truly reason? Or are they just sophisticated pattern matchers? In our latest preprint, we explore this key question through a large-scale study of both open-source models such as Llama, Phi, Gemma, and Mistral, and leading closed models, including the…
Happy to announce NatPN at #ICLR2022 (Spotlight)! - It predicts uncertainty for many supervised tasks like classification & regression. - It guarantees high uncertainty for far OOD. - It only needs one forward pass at testing time. - It does not need OOD data for training.
Move over Taylor, there's another SVFT in town.
🎉 Thrilled to share that SVFT is officially accepted at #NeurIPS24! 🙌 See you all in Vancouver! w/ incredible co-authors @vijaylingam08 @VavreAditya @aneeshk1412 @gauthamkrishna_ Joydeep Ghosh @AlexGDimakis @eunsolc @abojchevski @sujaysanghavi