Zhe Zeng
@zhezeng0908
Incoming Assist. Prof. @CS_UVA | Faculty fellow @NYU_Courant | CS Ph.D @UCLA | Neurosymbolic AI, Probabilistic ML, Constraints, AI4Science | https://zzeng.me/
Our ICML workshop (Beyond Bayes: Paths Towards Universal Reasoning Systems) is still accepting submissions until May 25. Aiming for a broad discussion across cog sci, neuro, AI. What's your perspective on reasoning? Please submit abstracts! beyond-bayes.github.io
Excited for this ICML workshop I'm helping to co-organize (with @ZennaTavares, @rosemary_ke, @blamlab, @TaliaRinger, @osazuwa, Nada Amin, Eli Bingham, and Armando Solar-Lezama). What's your perspective on reasoning? Please submit abstracts!
🗓️ Deadline extended: 💥2nd June 2025!💥 We look forward to your work on: 🔌 #circuits and #tensor #networks 🕸️ ⏳ normalizing #flows 💨 ⚖️ scaling #NeSy #AI 🦕 🚅 fast and #reliable inference 🔍 ...& more! Please share 🙏
the #TPM ⚡Tractable Probabilistic Modeling ⚡Workshop is back at @UncertaintyInAI #UAI2025! Submit your work on: - fast and #reliable inference - #circuits and #tensor #networks - normalizing #flows - scaling #NeSy #AI 🕓 deadline: 23/05/25 👉 …able-probabilistic-modeling.github.io/tpm2025/
📢 I’m recruiting PhD students @CS_UVA for Fall 2025! 🎯 Neurosymbolic AI, probabilistic ML, trustworthiness, AI for science. See my website for more details: zzeng.me 📬 If you're interested, apply and mention my name in your application: engineering.virginia.edu/department/com…
🚨🚨 We are hiring! RT appreciated! Prof. Rui Song (song-ray.github.io) and I will recruit post-doc scientists through Amazon’s post-doc program (amazon.science/postdoctoral-s…).
Proposing Ctrl-G, a neurosymbolic framework that enables arbitrary LLMs to follow logical constraints (length control, infilling …) with 100% guarantees. Ctrl-G beats GPT-4 on text editing, achieving a >30% higher satisfaction rate in human evaluation. arxiv.org/abs/2406.13892
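Ctrl-G's full construction pairs the LLM with an HMM approximation and a DFA for the constraint; the sketch below shows only the core DFA-masking idea, enforcing an exact-length constraint during greedy decoding. All names and the toy scoring function are my own illustrations, not the paper's code.

```python
import random

# Toy constraint-guided decoding: a DFA over token emissions enforces
# "exactly N content tokens, then EOS".  Ctrl-G combines such automata
# with an HMM approximation of the LLM to guarantee satisfaction; this
# sketch only shows hard masking with a stand-in scorer for the LLM.

VOCAB = ["the", "cat", "sat", "mat", "<eos>"]
N = 3  # required number of content tokens

def allowed(state, token):
    """DFA transition check: state counts content tokens emitted so far."""
    if token == "<eos>":
        return state == N          # EOS legal only once exactly N tokens out
    return state < N               # content tokens legal until the quota

def fake_llm_scores(prefix):
    """Stand-in for LLM next-token scores (any scorer would do here)."""
    rng = random.Random(len(prefix))
    return {t: rng.random() for t in VOCAB}

def constrained_greedy():
    prefix, state = [], 0
    while True:
        scores = fake_llm_scores(prefix)
        legal = {t: s for t, s in scores.items() if allowed(state, t)}
        tok = max(legal, key=legal.get)   # greedy over legal tokens only
        if tok == "<eos>":
            return prefix
        prefix.append(tok)
        state += 1

out = constrained_greedy()   # always exactly N content tokens
```

Because illegal tokens are never scored, the length constraint holds by construction, with no rejection sampling or prompt engineering.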
Very excited about this work! If you are an LLM researcher frustrated by long wait times on generations, I highly recommend checking out prepacking.
🚨LLM RESEARCHERS🚨Want a free boost in speed and memory efficiency for your HuggingFace🤗LLM with ZERO degradation in generation quality? Introducing Prepacking, a simple method to obtain up to 6x speedup and 16x memory efficiency gains in prefilling prompts of varying lengths.…
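As I understand the method, the gain comes from bin-packing variable-length prompts into full rows instead of padding each one to the batch maximum, with restarted position ids and segment-restricted attention. A hypothetical first-fit packer (my illustration, not the authors' implementation):

```python
# Sketch of the packing step behind prepacking: bin-pack prompts into a
# few full-length rows, then give each packed prompt restart position
# ids and a segment id (so the attention mask keeps prompts independent).

def prepack(prompts, max_len):
    bins = []  # each bin: list of prompts whose total length <= max_len
    for p in sorted(prompts, key=len, reverse=True):  # first-fit decreasing
        for b in bins:
            if sum(len(q) for q in b) + len(p) <= max_len:
                b.append(p)
                break
        else:
            bins.append([p])
    rows = []
    for b in bins:
        tokens, pos_ids, seg_ids = [], [], []
        for seg, p in enumerate(b):
            tokens += p
            pos_ids += list(range(len(p)))   # positions restart per prompt
            seg_ids += [seg] * len(p)        # attention stays within a segment
        rows.append((tokens, pos_ids, seg_ids))
    return rows

prompts = [[1, 2, 3], [4, 5], [6], [7, 8, 9, 10]]
rows = prepack(prompts, max_len=6)   # 4 prompts fit into 2 packed rows
```

Fewer rows means fewer padded tokens pushed through the prefill forward pass, which is where the reported speed and memory gains come from.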
Using probabilistically sound objectives improves #WeaklySupervisedLearning 🤩 We’ll present this work at #NeurIPS in person and would be happy to chat!
Many approaches to weakly supervised learning are ad hoc, inexact, and limited in scope 😞. We propose Count Loss 🎉, a simple ✅, exact ✅, differentiable ✅, and tractable ✅ means of unifying count-based weakly supervised settings! See at NeurIPS 2023!
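The count distribution Pr(Σᵢ Xᵢ = k) over independent Bernoulli predictions is exactly the Poisson-binomial distribution, which a short dynamic program computes in O(nk). A pure-Python sketch (in practice the same recurrence runs on autograd tensors, so the loss is differentiable end to end):

```python
import math

# Count-based weak supervision: a bag of instances plus only the count k
# of positives.  Count Loss trains on -log Pr(sum of predictions = k),
# exact and tractable via the classic Poisson-binomial DP below.

def count_distribution(probs):
    """dp[k] = Pr(exactly k of the independent Bernoulli variables equal 1)."""
    dp = [1.0]
    for p in probs:
        nxt = [0.0] * (len(dp) + 1)
        for k, mass in enumerate(dp):
            nxt[k] += mass * (1.0 - p)   # this instance predicted negative
            nxt[k + 1] += mass * p       # this instance predicted positive
        dp = nxt
    return dp

def count_loss(probs, k):
    """Negative log-likelihood of the observed count k."""
    return -math.log(count_distribution(probs)[k])

probs = [0.9, 0.8, 0.1]          # per-instance positive probabilities
dist = count_distribution(probs)
loss = count_loss(probs, 2)      # supervise with "this bag has 2 positives"
```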
I will be hiring through #ELLIS this year: 3 fully-funded PhD positions for troublemakers in #ML #AI who want to design the next gen of #probabilistic #models and #programs that are provably #reliable and #efficient Join april-tools.github.io @InfAtEd Email me! Please share!
The portal is open: Our #ELLISPhD Program is now accepting applications! Apply by November 15 to work with leading #AI labs across Europe and choose your advisors among 200 top #machinelearning researchers! #JoinELLISforEurope #PhD #PhDProgram #ML ellis.eu/news/ellis-phd…
LAFI@POPL 2024 Call for papers is out! Submit your probabilistic and/or differentiable programming extended abstracts (deadline Oct 27 AoE)! popl24.sigplan.org/home/lafi-2024
I have fully-funded PhD positions (3.5 yrs) for troublemakers in #ML #AI who want to design the next gen of #probabilistic #models and #programs that are provably #reliable and #efficient Join @InfAtEd @EdinburghUni Email me! ✉️ Please share! 🔁 Apply 👉nolovedeeplearning.com/buysellexchang…
Can we enforce a k-subset constraint in neural networks? 🤔 Our #ICLR work answers with SIMPLE 😎, a gradient estimator that enables differentiable learning of k-subset distributions! Sadly we won’t be at #ICLR23 in person, but feel free to check out our work; any thoughts are welcome!
We want a neural network's output to depend on a sparse set of features, for explainability and regularization. Sampling? Non-differentiable 😞 We propose SIMPLE, a gradient estimator for the k-subset distribution w/ lower bias and variance than SoTA 😉. At ICLR 2023 🥳 arxiv.org/abs/2210.01941
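SIMPLE's gradient relies on the exact marginals of the k-subset distribution p(x) ∝ exp(θ·x) subject to |x| = k; these reduce to elementary symmetric polynomials, computable by a small dynamic program. An illustrative sketch of that marginal computation, not the paper's implementation:

```python
import math

# Exact marginals Pr(x_i = 1) under the k-subset distribution
# p(x) ∝ exp(θ·x) with |x| = k, via elementary symmetric polynomials.

def esp(ws, k):
    """Elementary symmetric polynomials e_0..e_k of the weights ws (a DP)."""
    e = [1.0] + [0.0] * k
    for w in ws:
        for j in range(k, 0, -1):   # update in place, high order first
            e[j] += w * e[j - 1]
    return e

def k_subset_marginals(theta, k):
    ws = [math.exp(t) for t in theta]
    Z = esp(ws, k)[k]               # partition function e_k(w)
    margs = []
    for i, w in enumerate(ws):
        rest = ws[:i] + ws[i + 1:]
        # Pr(x_i = 1) = w_i * e_{k-1}(w without i) / e_k(w)
        margs.append(w * esp(rest, k - 1)[k - 1] / Z)
    return margs

theta = [1.0, 0.5, -0.2, 0.0]
m = k_subset_marginals(theta, k=2)   # marginals sum to k by construction
```

Pairing an exact sample on the forward pass with these exact marginals on the backward pass is what keeps the estimator's bias and variance low.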
Get in touch if you are interested in working with me on reliable and efficient probabilistic #modeling and #reasoning of complex #graphs such as #molecules, #proteins and interaction #networks! @EdinburghUni @InfAtEd and @BioMedAI_CDT offer lots of opportunities for #graphML
📣 Recruitment for 2023 entry is now open for @BioMedAI_CDT 5th cohort of CDT students. 4-year PhD studentships in Biomedical AI are fully funded and open to students in the UK and internationally. Apply by 13 January: edin.ac/3gyv2A0
“Inference for hybrid programs has changed dramatically with the introduction of Weighted Model Integration.” 💥🤩💥
Awesome! There's gonna be a 2nd edition of the #probabilistic #logic #programming book by @rzf! A boon for the #NeSy and #PPL communities💥 👉ml.unife.it/plp-book/ A whole new chapter about reasoning in hybrid systems, with even a primer on #weighted #model #integration!
Let's go beyond the usual inference tasks in probabilistic models... 🧐 How do we compute queries like Pr(E > MC² | C)? 🤨 Why should we even care? Find out at @IJCAIconf!
#IJCAI2022Tutorials Hybrid Probabilistic Inference with Algebraic and Logical Constraints 🗣️Presenters: @paolo_morettin, Pedro Zuidberg Dos Martires, @samuelkolb @andrea_whatever ➡dtai.cs.kuleuven.be/tutorials/wmit… #IJCAI2022 #Vienna
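For context, queries like Pr(E > MC² | C) mix Boolean and continuous variables, and weighted model integration (WMI) answers them by generalizing weighted model counting; schematically:

```latex
% WMI of an SMT formula \Delta over Booleans B and reals X, with weight w:
\mathrm{WMI}(\Delta, w) = \sum_{\mu \models \Delta^{B}} \int_{\{x \,:\, (x, \mu) \models \Delta\}} w(x, \mu) \, \mathrm{d}x
% A conditional query is then a ratio of two WMI computations:
\Pr(\phi \mid \psi) = \frac{\mathrm{WMI}(\phi \wedge \psi, w)}{\mathrm{WMI}(\psi, w)}
```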
"instead of learning to emulate the correct reasoning function, the BERT model has in fact learned to make predictions leveraging statistical features in logical reasoning problems." 💥💥💥 very interesting work!👇
Can language models learn to reason by end-to-end training? We show that near-perfect test accuracy is deceiving: instead, they tend to learn statistical features inherent to reasoning problems. See more in arxiv.org/abs/2205.11502 @LiLiunian @TaoMeng10 @kaiwei_chang @guyvdb
I have fully-funded PhD positions for troublemakers in #ML #AI at @ancAtEd @InfAtEd @EdinburghUni who want to design the next gen of #probabilistic models and programs that are provably reliable and efficient. Email me! ✉️ Please share! 🔃 👉nolovedeeplearning.com/buysellexchang…