David Chiang
@davidweichiang
Associate Professor of Computer Science and Engineering at the University of Notre Dame. Natural language processing, formal grammars, machine learning
Very excited about this work: deep results from logic shedding light on Transformers and the benefit of depth
New on arXiv: Knee-Deep in C-RASP, by @pentagonalize, Michael Cadilhac, and me. The solid stepped line is our theoretical prediction based on which problems C-RASP can solve, and the numbers/colors show what transformers (with no position embedding) can learn.
I'll be presenting our paper together with @mhahn29 at the Saturday morning poster session. Feel free to reach out!
When do transformers length-generalize? Generalizing to sequences longer than those seen during training is a key challenge for transformers. Some tasks succeed, others fail, but *why*? We introduce a theoretical framework to understand and predict length generalization.
Last week, I had a fantastic time presenting our work on belief congruence in LLMs at the Midwest Speech and Language Days (MSLD) 2025, hosted at Notre Dame. 💡 Grateful to the organizers for putting together such a great event! 🙏✨ @ND_CSE
Do LLMs exhibit belief-based biases like humans do? We examine the presence of belief congruence in multi-agent systems and find that LLMs exhibit even stronger belief congruence than humans. Paper: arxiv.org/pdf/2503.02016 Code: github.com/MichiganNLP/Be… 🧵(1/n)