Joshua Ong @ ACL2025
@joshuaongg21
Visiting Researcher @EdinburghNLP | PhD Student @imperialcollege LLM Reasoning | Autoformalisation | Neurosymbolic AI
'Theorem Prover as a Judge for Synthetic Data Generation' has been accepted to ACL (Main) 🚀. Do check us out on July 30th (Wednesday), 11:00-12:30pm at Hall 4/5! A huge thank you to my amazing collaborators: Shay @GiwonHong413849 @WendaLi8 📝: aclanthology.org/2025.acl-long.…

My Ph.D. focuses on understanding, developing, and applying retrieval-augmented language models (e.g., RAG) to make LMs more reliable, efficient, and adaptable. 📄 Thesis: akariasai.github.io/assets/pdf/aka… 🎥 Video from my defense: youtu.be/qnWyU9zryao?fe…
Notable mention -- Joshua Ong (@joshuaongg21), main author of "Theorem Prover as a Judge for Synthetic Data Generation" (arxiv.org/abs/2502.13137), just finished his BSc in Maths at Edinburgh, and he is now starting a PhD with @e_giunchiglia at Imperial! Keep him on your radar 🚀
The amazing folks at @EdinburghNLP will be presenting a few papers at ACL 2025 (@aclmeeting); if you're in Vienna, touch base with them! Here are the papers in the main track 🧵
Slides for my lecture “LLM Reasoning” at Stanford CS 25: dennyzhou.github.io/LLM-Reasoning-… Key points: 1. Reasoning in LLMs simply means generating a sequence of intermediate tokens before producing the final answer. Whether this resembles human reasoning is irrelevant. The crucial…
Lovely to see the impressive performance of the Seed Prover developed by the ByteDance Seed team at IMO 2025 — achieving a silver-level score (30 out of 42) within three days, and reaching 35 out of 42 with extended compute time. leanprover.zulipchat.com/#narrow/channe…
Some updates 🚨 I finished my Ph.D. at @uwcse in June 2025! After a year at AI2 as a Research Scientist, I am joining CMU @LTIatCMU & @mldcmu (courtesy) as an Assistant Professor in Fall 2026. The journey, acknowledgments & recruiting in 🧵
We hosted a talk by @joshuaongg21, Visiting Researcher at @EdinburghNLP and incoming PhD Student at @imperialcollege @ImperialX_AI, on applying symbolic reasoning and theorem-prover autoformalisation in mathematical reasoning. Watch here: youtu.be/TuXu7Hp4HE8?si…. #NECLabs #LLM
It was a pleasure to host this great talk by Joshua @joshuaongg21 on Autoformalisation and Symbolic Reasoning for Mathematical Reasoning. Check it out on our YouTube channel 👇
🚨 New Paper 🧵 How effectively do reasoning models reevaluate their thoughts? We find that: - Models excel at identifying unhelpful thoughts but struggle to recover from them - Smaller models can be more robust - Self-reevaluation ability is far from true meta-cognitive awareness
🤯 MIND-BLOWN! A new paper just SHATTERED everything we thought we knew about AI reasoning! This is paradigm-shifting. A MUST-READ. Full breakdown below 👇 🧵 1/23
Reasoning models are quite verbose in their thinking process. Is that any good? We find that it makes reasoning models more accurate at telling what they know and don’t know (confidence)! Even non-reasoning models get better at this if they mimic the verbose reasoning! 👀
🙁 LLMs are overconfident even when they are dead wrong. 🧐 What about reasoning models? Can they actually tell us “My answer is only 60% likely to be correct”? ❗Our paper suggests that they can! Through extensive analysis, we investigate what enables this emergent ability.