Kareem Ahmed
@KareemYousrii
Postdoc @ University of California, Irvine | PhD from CS@UCLA | Neuro-Symbolic AI, Tractable Probabilistic Reasoning, Generative Models
Looking forward to all the interesting submissions!
Less than one week to submit abstracts for the Neurosymbolic Generative Models special track at @nesyconf 2025! Conference keynotes by @guyvdb, @tkipf & Deborah McGuinness. Details: 2025.nesyconf.org/nesy-generativ… 🚀
Very excited to announce the Neurosymbolic Generative Models special track at NeSy 2025! Looking forward to all the submissions!
Together with @KareemYousrii, we announce the Neurosymbolic Generative Models special track at the NeSy 2025 Conference 🎉 Call for Papers is live! 2025.nesyconf.org/nesy-generativ… See you in Santa Cruz in Sep 2025! @nesyconf
I'm a total fanboy of hybrid continuous-discrete neural architectures like this one 🚀
1/6 We're excited to share our #NeurIPS2024 paper: Probabilistic Graph Rewiring via Virtual Nodes! It addresses key challenges in GNNs, such as over-squashing and under-reaching, while reducing reliance on heuristic rewiring. w/ @ChendiQian @chrsmrrs @Mniepert Thread 🧵
I'll be at #EMNLP2024 next week presenting our work on tokenization alongside @renatogeh! Looking forward to meeting everyone and commiserating over just how hard the problem of tokenization is!
Where is the signal in LLM tokenization space? Does it only come from the canonical (default) tokenization? The answer is no! By looking at other ways to tokenize the same text, we get a consistent boost to LLM performance! arxiv.org/abs/2408.08541 1/5
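A minimal sketch of the underlying idea (not the paper's actual algorithm): score the same string under several tokenizations and marginalize over them, instead of trusting only the canonical one. The model choice, the toy split-based enumeration, and the helper names below are all illustrative assumptions.

```python
# Minimal sketch: score a string under several tokenizations and aggregate.
# Model name and helpers are illustrative; this is not the paper's method.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sequence_logprob(token_ids):
    """Log-probability of a token sequence under the LM."""
    ids = torch.tensor([token_ids])
    with torch.no_grad():
        log_probs = model(ids).logits.log_softmax(-1)
        # score token t given tokens < t
        return log_probs[0, :-1].gather(1, ids[0, 1:, None]).sum().item()

def alternative_tokenizations(text, canonical, max_variants=4):
    """Toy enumeration: re-tokenize the two halves of each character split.
    Every result decodes back to the same text with different tokens."""
    variants = []
    for cut in range(1, len(text)):
        ids = tok.encode(text[:cut]) + tok.encode(text[cut:])
        if ids != canonical and ids not in variants:
            variants.append(ids)
        if len(variants) >= max_variants:
            break
    return variants

text = "unbelievable"
canonical = tok.encode(text)
scores = [sequence_logprob(canonical)] + [
    sequence_logprob(v) for v in alternative_tokenizations(text, canonical)
]
# log-sum-exp marginalizes the evidence across tokenizations
print(torch.logsumexp(torch.tensor(scores), dim=0))
```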
Very happy that our work on tokenization led by @renatogeh is now accepted at #EMNLP2024!
Check out our work on tokenization led by the amazing @renatogeh! It turns out you should consider *many* ways of tokenizing a sentence, which surprisingly gives rise to a #neurosymbolic problem formulation!
Very interesting paper studying *why* entropy minimization works! Entropy minimization is near and dear to my heart since my first paper was on *Neuro-Symbolic Entropy Regularization*, or how to minimize entropy when you have domain knowledge! (proceedings.mlr.press/v180/ahmed22a/…)
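For intuition, here is a minimal sketch (illustrative, not the paper's code) of what the neuro-symbolic variant looks like for the simplest constraint, "exactly one of n binary outputs is true": the entropy is computed over the predictive distribution restricted to constraint-satisfying states.

```python
# Minimal sketch: entropy regularization restricted to constraint-satisfying
# states, for the "exactly one of n variables is true" constraint, where the
# conditional distribution has a closed form. Names are illustrative.
import torch

def neurosymbolic_entropy(probs, eps=1e-12):
    """probs: (n,) independent Bernoulli marginals from a neural net.
    Returns H(Y | exactly-one constraint holds)."""
    log_p = torch.log(probs + eps)
    log_1mp = torch.log(1 - probs + eps)
    # log-weight of the state where only variable i is on
    log_w = log_p + (log_1mp.sum() - log_1mp)
    # normalize over the satisfying states only
    log_q = log_w - torch.logsumexp(log_w, dim=0)
    return -(log_q.exp() * log_q).sum()

probs = torch.tensor([0.7, 0.2, 0.1], requires_grad=True)
loss = neurosymbolic_entropy(probs)   # add to the task loss as a regularizer
loss.backward()
print(loss.item(), probs.grad)
```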
Heads up #ICML attendees! @ori_press is presenting our paper on entropy minimization in OOD today in Hall C #910 from 13:30-15:00. Learn why entropy minimization is beneficial for OOD and how to bridge the gap between theory and practice. Don't miss it! @ylecun @MatthiasBethge
Sadly I won’t be at #ICML2024 but be sure to stop by our poster tomorrow (Tuesday) 11:30-1:30 to talk to Anji about our work on how to exploit structured sparsity to perform *efficient* probabilistic reasoning!
📢 Want to train your *tractable* probabilistic models (a.k.a. Probabilistic Circuits or PCs) lightning fast and at scale? We developed PyJuice (github.com/Tractables/pyj…), a lightweight and easy-to-use package based on PyTorch to train your PCs and do inference tasks on them!
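For readers new to PCs, below is a tiny hand-rolled sum-product circuit (a mixture of independent Bernoulli leaves) written in plain PyTorch, just to show the kind of model PyJuice trains at scale; it deliberately does not use the PyJuice API, and all names are illustrative.

```python
# Tiny hand-rolled probabilistic circuit in plain PyTorch (NOT the PyJuice API):
# a sum node mixing two product nodes, each a product of Bernoulli leaves
# over variables X1, X2. Likelihoods are exact and tractable by construction.
import torch

leaf_logits = torch.randn(2, 2, requires_grad=True)   # [component, variable]
mix_logits = torch.randn(2, requires_grad=True)        # sum-node weights

def log_likelihood(x):
    """x: (batch, 2) binary data; returns per-example log-likelihood."""
    p = torch.sigmoid(leaf_logits)                      # Bernoulli leaf params
    leaf_lp = x[:, None, :] * torch.log(p) + (1 - x[:, None, :]) * torch.log(1 - p)
    prod_lp = leaf_lp.sum(-1)                           # product nodes
    log_w = torch.log_softmax(mix_logits, dim=0)        # normalized sum weights
    return torch.logsumexp(prod_lp + log_w, dim=-1)     # root sum node

x = torch.tensor([[1., 0.], [1., 1.], [0., 0.]])
opt = torch.optim.Adam([leaf_logits, mix_logits], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = -log_likelihood(x).mean()                    # maximum likelihood
    loss.backward()
    opt.step()
print(log_likelihood(x).exp())                          # exact probabilities
```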
hey Yann, I think you missed works that show that logical reasoning can be *exactly* compiled into differentiable computational graphs, e.g., proceedings.neurips.cc/paper_files/pa… and when exact compilation is not easy, there are plenty of approximate inference schemes that work like a charm!
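A minimal sketch of that idea (not the linked paper's code): a logical constraint compiles into an arithmetic expression whose value is the exact probability that the constraint holds, and that expression is differentiable with respect to the network's outputs. The constraint and variable names below are illustrative.

```python
# Minimal sketch: compile a logical rule into a differentiable expression for
# the exact probability it is satisfied. Rule: cat -> animal (if 'cat' is
# predicted, 'animal' must be too), assuming independent predicted marginals.
import torch

logits = torch.randn(2, requires_grad=True)      # [cat, animal] from a network
p_cat, p_animal = torch.sigmoid(logits)

# P(cat -> animal) = 1 - P(cat) * P(not animal), exact under independence
p_constraint = 1 - p_cat * (1 - p_animal)
loss = -torch.log(p_constraint)                   # penalize violating the rule
loss.backward()                                   # gradients reach the logits
print(p_constraint.item(), logits.grad)
```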
I'm very happy to share PyJuice, the culmination of years of scaling up probabilistic & logical reasoning for neural networks and LLMs, all in a PyTorch framework! #NeuroSymbolicAI
Come say "Hallo" at poster #75, Halle B from 16:30 if you want to talk about graph rewiring and overcoming limitations of GNNs 😄 #ICLR2024
Ever wanted to rewire a graph but you couldn't decide on what heuristic to use👀? Fear not! Our #ICLR24 PR-MPNN method eliminates the need to ask ChatGPT for a list of interesting graph properties and learns how to automatically do rewiring in an end-to-end manner 🤯. 🧵 1/n
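A rough sketch of the learned-rewiring idea (a simplification, not PR-MPNN's actual sampler or gradient estimator): score every candidate edge with a small network and draw a relaxed, differentiable adjacency using binary-Concrete (Gumbel-sigmoid) noise. All module and parameter names below are illustrative.

```python
# Rough sketch: differentiable edge scoring + relaxed sampling for rewiring.
# This is a simplification for intuition, not the PR-MPNN implementation.
import torch
import torch.nn as nn

class LearnedRewiring(nn.Module):
    def __init__(self, dim, tau=0.5):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.tau = tau

    def forward(self, x):
        """x: (n, dim) node features; returns a soft rewired adjacency (n, n)."""
        n = x.size(0)
        pairs = torch.cat([x[:, None, :].expand(n, n, -1),
                           x[None, :, :].expand(n, n, -1)], dim=-1)
        logits = self.scorer(pairs).squeeze(-1)          # one score per edge
        # binary-Concrete relaxation: differentiable approximate edge sampling
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log(1 - u)
        return torch.sigmoid((logits + noise) / self.tau)

x = torch.randn(5, 16)
adj = LearnedRewiring(16)(x)   # feed into any MPNN; gradients reach the scorer
print(adj.shape)
```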
I had a fantastic time working with everyone on this paper, pushing the frontier of gradient estimation as probabilistic inference for graph rewiring! Paper: arxiv.org/abs/2310.02156
A very cool application of probabilistic inference to weakly supervised learning! Vinay was absolutely wonderful to work with! He's currently looking for a PhD position and will be going to NeurIPS to present a poster on our work, so make sure to talk to him!
Many approaches to weakly supervised learning are ad hoc, inexact, and limited in scope 😞. We propose Count Loss 🎉, a simple ✅, exact ✅, differentiable ✅, and tractable ✅ means of unifying count-based weakly supervised settings! See it at NeurIPS 2023!
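A minimal sketch of the count-probability idea (illustrative, not the paper's implementation): with independent per-instance probabilities, the probability that exactly k instances are positive is a Poisson-binomial, computable exactly and differentiably with a simple dynamic program.

```python
# Minimal sketch: exact, differentiable probability that exactly k of n
# independent instance-level predictions are positive (Poisson-binomial DP).
# All names are illustrative; this is not the paper's code.
import torch

def count_log_prob(probs, k):
    """probs: (n,) instance probabilities; returns log P(number of positives = k)."""
    n = probs.shape[0]
    dp = torch.zeros(n + 1)          # dp[j] = P(j positives among instances so far)
    dp[0] = 1.0
    for p in probs:
        shifted = torch.cat([torch.zeros(1), dp[:-1]])   # one more positive
        dp = dp * (1 - p) + shifted * p                   # fresh tensor, no in-place
    return torch.log(dp[k] + 1e-12)

logits = torch.randn(8, requires_grad=True)
probs = torch.sigmoid(logits)
loss = -count_log_prob(probs, k=3)   # supervise with only a bag-level count
loss.backward()
print(loss.item(), logits.grad.shape)
```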