Mayur Naik
@AI4Code
Misra Family Professor @CIS_Penn. I do research on neurosymbolic AI and cybersecurity.
Foundation models can now perform many reasoning tasks via prompting alone. So do we still need to train neuro-symbolic systems? Our position paper argues that neuro-symbolic prompting, not training, is the path to generalizable and interpretable reasoning.
🧠 Foundation models are reshaping reasoning. Do we still need specialized neuro-symbolic (NeSy) training, or can clever prompting now suffice? Our new position paper argues the road to generalizable NeSy should be paved with foundation models. 🔗 arxiv.org/abs/2505.24874 (🧵1/9)
Swing by our poster session today at 11 if you're at ICML to learn more about speeding up neurosymbolic learning! We will be in the East Exhibition Hall A-B, # E-2003
We are excited to share Dolphin, a programmable framework for scalable neurosymbolic learning, to appear at ICML 2025! Links to paper and code in thread below 👇
Very much enjoyed advocating for symbolic reasoning for Trustworthy AI in my NSF CISE lecture; the recording is now available at nsf.gov/events/neurosy…
Congratulations to Dr. Ziyang Li (@_ziyang_) on defending his dissertation today! Titled "Neurosymbolic Programming in Scallop: Design, Implementation, and Applications", this dissertation proposed Scallop, a unified programming system for combining the otherwise complementary…
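Scallop's symbolic side is rooted in Datalog-style relational reasoning. A minimal sketch of the kind of fixpoint computation a Datalog engine performs (transitive closure over an `edge` relation) — this is plain Python for illustration, not Scallop syntax, and real Scallop layers differentiable probabilistic provenance on top:

```python
# Naive Datalog-style fixpoint: derive path(a, b) from edge(a, b).
def transitive_closure(edges):
    """Iterate the rule path(a, c) :- path(a, b), edge(b, c) to a fixpoint."""
    path = set(edges)
    while True:
        new = {(a, c) for (a, b) in path for (b2, c) in edges if b == b2}
        if new <= path:          # no new facts derived: fixpoint reached
            return path
        path |= new

edges = {(0, 1), (1, 2), (2, 3)}
print(sorted(transitive_closure(edges)))
# [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```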


Looking forward to discussing the promise of neurosymbolic approaches to trustworthy AI at @NSF CISE nsf.gov/events/neurosy…
One of the most effective things the U.S. or any other nation can do to ensure its competitiveness in AI is to welcome high-skilled immigration and international students who have the potential to become high-skilled. For centuries, the U.S. has welcomed immigrants, and this…
I am alarmed by the proposed cuts to U.S. funding for basic research, and the impact this would have for U.S. competitiveness in AI and other areas. Funding research that is openly shared benefits the whole world, but the nation it benefits most is the one where the research is…
We can’t thank @awscloud enough for the support! We are excited to see the developments in our students’ research!
With $840K in funding from @awscloud, @PennAsset is supporting 12 Ph.D. students conducting cutting-edge research in AI safety, robustness and interpretability. bit.ly/422Nfeo #AIMonth2025 #TrustworthyAI
Founders who were PhD or post-doc in my lab at Berkeley, **largely funded by NSF / DoD grants**, start-up, market cap (collected by OpenAI Deep Research)
🌟 We happily announce the ACM SIGSOFT Awards 2025 🌟 -> more details in the following posts. 🤝 Congratulations to all winners for their significant contributions and a big thanks to all colleagues who supported us in the selection committees!
Updating real-world large legacy projects like binutils? Meet C2SaferRust (arxiv.org/abs/2501.14257): leveraging program analysis & LLMs to create idiomatic, safer Rust with 38% fewer raw pointers & 28% less unsafe code while preserving functionality 🚀 #rustlang #AI4code #AIAgent
Stop by our Thursday evening poster session at #NeurIPS2024 to learn about our work on how to integrate black-box components in learning pipelines
Introducing neural programming: end-to-end learning of neural models composed with any black-box program, even those that call GPT-4. Paper: arxiv.org/abs/2406.06246 Blog: debugml.github.io/neural-program… Code: github.com/alaiasolkobres…
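The challenge here is learning through a component with no gradients. A self-contained toy sketch of one standard workaround — a REINFORCE-style score-function estimator, not necessarily the paper's actual method; the toy task and all names are illustrative — training a categorical "neural" layer through a non-differentiable black-box program:

```python
import math, random

random.seed(0)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def black_box(symbol):
    # Non-differentiable program: reward 1.0 only for the target symbol.
    return 1.0 if symbol == 2 else 0.0

logits = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    k = random.choices(range(3), weights=probs)[0]  # sample a discrete symbol
    r = black_box(k)                                # call the black box
    # REINFORCE update: r * grad log p(k), where grad log p(k) = onehot(k) - probs
    for i in range(3):
        g = (1.0 if i == k else 0.0) - probs[i]
        logits[i] += lr * r * g

probs = softmax(logits)
print(probs.index(max(probs)))  # the reinforced symbol, 2
```

The same pattern scales to any black-box call (including an LLM) as long as its output can be scored: sample from the neural component, score the program's result, and weight the log-probability gradient by that score.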