Lorenzo Loconte
@loreloc_
PhD Student @ University of Edinburgh
We learn more expressive mixture models that can subtract probability density by squaring them 🚨 We show squaring can reduce expressiveness. To tackle this we build sum of squares circuits 🚀 We explain why complex parameters help, and show an expressiveness hierarchy around them.
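For intuition on the pinned result above, here is a minimal NumPy sketch of a squared mixture of two Gaussians (not from the paper's codebase; weights and values are illustrative). Squaring a linear combination lets weights go negative while the density stays non-negative, and the normalizer has a closed form:

```python
import numpy as np
from scipy.stats import norm

# Squared mixture: p(x) = (sum_i w_i N(x; mu_i, s_i))^2 / Z.
# Squaring keeps p(x) >= 0 even with negative weights, so components
# can subtract density instead of only adding it.
w = np.array([1.0, -0.6])   # note the negative weight
mu = np.array([0.0, 0.0])
s = np.array([2.0, 0.5])

# The normalizer has a closed form, since the product of two Gaussian
# pdfs integrates to another Gaussian pdf:
#   Z = sum_{i,j} w_i w_j * N(mu_i; mu_j, sqrt(s_i^2 + s_j^2))
cross = norm.pdf(mu[:, None], loc=mu[None, :],
                 scale=np.sqrt(s[:, None] ** 2 + s[None, :] ** 2))
Z = w @ cross @ w

def density(x):
    f = norm.pdf(x[:, None], loc=mu, scale=s)  # (len(x), 2) component pdfs
    return (f @ w) ** 2 / Z                    # non-negative by construction

print(density(np.linspace(-6.0, 6.0, 5)))
```

The negative weight carves density out of the broad component near the origin, something an ordinary (monotone) mixture cannot express as compactly.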
EurIPS is coming! 📣 Mark your calendar for Dec. 2-7, 2025 in Copenhagen 📅 EurIPS is a community-organized conference where you can present accepted NeurIPS 2025 papers; it is endorsed by @NeurIPSConf and #NordicAIR, and co-developed by @ELLISforEurope eurips.cc
Spotlight poster coming soon at #ICML2025 @icmlconf! 📌East Exhibition Hall A-B E-1806 🗓️Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT 📜arxiv.org/pdf/2410.12537 Let’s chat! I’m always up for conversations about knowledge graphs, reasoning, neuro-symbolic AI, and benchmarking.
🚨Is complex query answering really complex?🚨 Unfortunately not! Current benchmarks boil down to link prediction 98% of the time... How to fix this? 👇👇👇 📜arxiv.org/abs/2410.12537 with @c_gregucci @BoXiongs @loreloc_ @PMinervini @ststaab
🧵Why are linear properties so ubiquitous in LLM representations? We explore this question through the lens of 𝗶𝗱𝗲𝗻𝘁𝗶𝗳𝗶𝗮𝗯𝗶𝗹𝗶𝘁𝘆: “All or None: Identifiable Linear Properties of Next-token Predictors in Language Modeling” Published at #AISTATS2025🌴 1/9
We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, combining powerful multimodal understanding with symbolic reasoning 🚀 Read more 👇
Just under 10 days left to submit your latest endeavours in ⚡#tractable⚡ probabilistic models❗ Join us at TPM @auai.org #UAI2025 and show how to build #neurosymbolic / #probabilistic AI that is both fast and trustworthy!
the #TPM ⚡Tractable Probabilistic Modeling ⚡Workshop is back at @UncertaintyInAI #UAI2025! Submit your work on: - fast and #reliable inference - #circuits and #tensor #networks - normalizing #flows - scaling #NeSy #AI 🕓 deadline: 23/05/25 👉 …able-probabilistic-modeling.github.io/tpm2025/
In LoCo-LMs, we propose a neuro-symbolic loss function to fine-tune an LM to acquire logically consistent knowledge from a domain graph, i.e., with respect to a set of logical consistency rules. @looselycorrect @tetraduzione arxiv.org/abs/2409.13724
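A minimal sketch of the flavor of such a loss, for a single implication rule (the LoCo-LMs objective itself is more general; the function and variable names below are illustrative, not from the paper's code):

```python
import torch

def implication_consistency_loss(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    """Penalize violating the rule A -> B, given the LM's probabilities
    p_a = P(A is true) and p_b = P(B is true) for two statements.

    Treating A and B as independent Bernoullis, the rule fails only when
    A holds and B does not, so P(A -> B) = 1 - p_a * (1 - p_b); we
    minimize the negative log-probability of satisfying the rule.
    """
    return -torch.log(1.0 - p_a * (1.0 - p_b) + 1e-9)

# e.g. p_a, p_b read off the LM's yes/no token probabilities
p_a = torch.tensor(0.9, requires_grad=True)
p_b = torch.tensor(0.2, requires_grad=True)
loss = implication_consistency_loss(p_a, p_b)
loss.backward()  # gradient descent pushes p_b up (or p_a down) toward consistency
```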
We developed a library to make logical reasoning embarrassingly parallel on the GPU. For those at ICLR 🇸🇬: you can get the juicy details tomorrow (poster #414 at 15:00). Hope to see you there!
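The tweet doesn't show the library's API, but the core trick of making logical evaluation embarrassingly parallel on a GPU can be sketched with plain torch; everything below is an illustrative stand-in, not the library itself:

```python
import torch

# Evaluate a CNF formula over many truth assignments at once.
# clauses[c, v] in {-1, 0, +1}: variable v appears negated, not at all,
# or positively in clause c. assignments[b, v] in {0, 1}.
def cnf_batch_eval(clauses: torch.Tensor, assignments: torch.Tensor) -> torch.Tensor:
    pos = (clauses == 1).float()   # (C, V)
    neg = (clauses == -1).float()
    a = assignments.float()        # (B, V)
    # A clause holds if any positive literal is true or any negative
    # literal is false; one matmul counts satisfied literals per (b, c).
    sat = (a @ pos.T + (1 - a) @ neg.T) > 0   # (B, C)
    return sat.all(dim=1)          # formula true iff every clause holds

# (x1 OR NOT x2) AND (x2 OR x3), over all 8 assignments, on GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
clauses = torch.tensor([[1, -1, 0], [0, 1, 1]], device=device)
assignments = torch.cartesian_prod(*[torch.tensor([0, 1], device=device)] * 3)
print(cnf_batch_eval(clauses, assignments))
```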
🚨New at #ICLR: we introduce the first ever 𝐥𝐚𝐲𝐞𝐫 that makes 𝐚𝐧𝐲 neural network 𝐜𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐭 𝐛𝐲 𝐝𝐞𝐬𝐢𝐠𝐧 with constraints expressed as 𝐝𝐢𝐬𝐣𝐮𝐧𝐜𝐭𝐢𝐨𝐧𝐬 𝐨𝐟 𝐥𝐢𝐧𝐞𝐚𝐫 𝐢𝐧𝐞𝐪𝐮𝐚𝐥𝐢𝐭𝐢𝐞𝐬—even if they define 𝐧𝐨𝐧-𝐜𝐨𝐧𝐯𝐞𝐱 𝐬𝐩𝐚𝐜𝐞𝐬!
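To make the idea concrete without reproducing the paper's actual layer: below is a heavily simplified sketch where each disjunct is a single halfspace, for which Euclidean projection has a closed form. The real ICLR layer handles full disjunctions of conjunctions of linear inequalities (non-convex regions); class and parameter names here are made up for illustration:

```python
import torch

class DisjunctiveHalfspaceLayer(torch.nn.Module):
    """Toy sketch: force outputs to satisfy (a_1.y <= b_1) OR ... OR (a_K.y <= b_K)
    by projecting each output onto the nearest satisfying halfspace."""

    def __init__(self, A: torch.Tensor, b: torch.Tensor):
        super().__init__()
        self.A, self.b = A, b  # (K, D) normals, (K,) offsets

    def forward(self, y: torch.Tensor) -> torch.Tensor:  # y: (B, D)
        slack = y @ self.A.T - self.b                         # (B, K); <= 0 means satisfied
        # Projection onto halfspace k moves y by max(slack, 0) / ||a_k||^2 * a_k.
        step = slack.clamp(min=0) / (self.A ** 2).sum(dim=1)  # (B, K)
        dist = step * self.A.norm(dim=1)                      # distance to each halfspace
        k = dist.argmin(dim=1)                                # pick the nearest disjunct
        return y - step.gather(1, k[:, None]) * self.A[k]     # project onto it

layer = DisjunctiveHalfspaceLayer(torch.tensor([[1.0, 0.0], [0.0, 1.0]]),
                                  torch.tensor([0.0, 0.0]))
print(layer(torch.tensor([[2.0, 3.0]])))  # -> [[0., 3.]]: projected onto y1 <= 0
```

Points already inside some disjunct have zero distance to it and pass through unchanged, and the projection is piecewise differentiable, so gradients still flow.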
hi, we found problematic benchmarks in complex query answering as well! arxiv.org/abs/2410.12537 @tetraduzione @Mniepert @c_gregucci @chrsmrrs @michael_galkin
Our paper "Low-rank finetuning for LLMs is inherently unfair" won a 𝐛𝐞𝐬𝐭 𝐩𝐚𝐩𝐞𝐫 𝐚𝐰𝐚𝐫𝐝 at the @RealAAAI colorai workshop! #AAAI2025 Congratulations to amazing co-authors @nandofioretto @WatIsDas @CuongTr95450563 and M. Romanelli 🥳🥳🥳
Fine-tuning your LLM with LoRA for critical areas like ⚖️ criminal justice, 🏥 healthcare, or 💼 hiring? ⚠️ Think again! ⚠️ 🚨 We found that LoRA can amplify #AI #harms: ❗️False sense of #safety #alignment 🚫Increased #unfairness and #bias, hitting minority groups the hardest
We all know backpropagation can calculate gradients, but it can do much more than that! Come to my #AAAI2025 oral tomorrow (11:45, Room 119B) to learn more.
Circuits use sum-product computation graphs to model probability densities. But how do we ensure the non-negativity of the output? Check out our poster "On the Relationship between Monotone and Squared Probabilistic Circuits" at AAAI 2025 **today**: 12:30pm-2:30pm, poster #841.
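The two standard answers the poster title refers to, in a toy numeric sketch (values are illustrative):

```python
import numpy as np

# Two ways to make a sum-product computation graph output a
# non-negative (unnormalized) density, shown for one sum unit
# over two non-negative input function values f_1(x), f_2(x).
f = np.array([0.3, 1.2])

# 1) Monotone circuit: restrict sum-unit weights to be non-negative,
#    e.g. via an exp reparameterization of free parameters.
w_mono = np.exp(np.array([-0.5, 0.8]))   # w >= 0 by construction
p_mono = w_mono @ f                      # >= 0 since everything is non-negative

# 2) Squared circuit: allow arbitrary real weights, square the output.
w_real = np.array([0.9, -1.4])           # negative weight = subtraction
p_sq = (w_real @ f) ** 2                 # >= 0 because it is a square

print(p_mono, p_sq)
```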
We are going to present our poster "Sum of Squares Circuits" at AAAI in Philadelphia today! Hall E, 12:30pm-2:00pm, poster #840. We trace expressiveness connections between different types of additive and subtractive deep mixture models and tensor networks 📜 arxiv.org/abs/2408.11778
🔥 Can AI reason over time while following logical rules in relational domains? We will present Relational Neurosymbolic Markov Models (NeSy-MMs) next week at #AAAI2025! 🎉 📜 Paper: arxiv.org/pdf/2412.13023 💻 Code: github.com/ML-KULeuven/ne… 🧵⬇️