Nicolas Yax
@nicolas__yax
PhD student in AI and cognitive sciences. Investigating the cognition of LLMs and developing tools for the study of LLMs at @ENS_ULM and @FlowersINRIA.
🔥Our paper PhyloLM got accepted at ICLR 2025!🔥 In this work we show how easy it can be to infer relationships between LLMs by constructing trees, and to predict their performance and behavior at very low cost, with @StePalminteri and @pyoudeyer! Here is a brief recap ⬇️
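The gist of this kind of approach can be sketched in a few lines: query each model on a shared set of probe prompts, turn output overlap into pairwise distances, then cluster the distance matrix into a tree. The sketch below is illustrative only (Jaccard distance over token sets and average-linkage clustering are stand-ins, not the paper's exact genetic-distance method; the model names are made up):

```python
from itertools import combinations

def jaccard(a, b):
    """Distance between two token collections: 1 - overlap / union."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

def phylo_tree(outputs):
    """Average-linkage agglomerative clustering of models by output overlap.

    outputs: {model_name: tokens the model produced on shared probe prompts}.
    Returns a nested tuple, e.g. ('other', ('llama-a', 'llama-b')).
    """
    names = list(outputs)
    leaf_dist = {frozenset((a, b)): jaccard(outputs[a], outputs[b])
                 for a, b in combinations(names, 2)}

    def cluster_dist(pair):
        # Average distance over all cross-cluster leaf pairs.
        la, lb = clusters[pair[0]][1], clusters[pair[1]][1]
        total = sum(leaf_dist[frozenset((x, y))] for x in la for y in lb)
        return total / (len(la) * len(lb))

    # Each cluster is (subtree, frozenset of leaf names); merge closest pairs.
    clusters = [(n, frozenset([n])) for n in names]
    while len(clusters) > 1:
        i, j = min(combinations(range(len(clusters)), 2), key=cluster_dist)
        (ta, la), (tb, lb) = clusters[i], clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(((ta, tb), la | lb))
    return clusters[0][0]
```

Models that produce similar outputs end up as siblings in the tree, which is what makes relatedness (e.g. shared training data or finetuning lineage) readable from outputs alone.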
We just opened a new (engineering) internship position in the @FlowersINRIA team with @pyoudeyer: docs.google.com/document/d/13V… We'll focus on developing our lamorel library, which has been central to our recent work on grounding embodied LLMs (more below 👇1/4)
I'm attending ICML 2025 this week in Vancouver where we're presenting our MAGELLAN paper along with @LorisGaven and @CartaThomas2! 📅 Come discuss at our poster session on July 17 at 11 am East Exhibition Hall A-B E-2803 Or reach out for a chat! x.com/CartaThomas2/s…
🚀 Introducing 🧭MAGELLAN—our new metacognitive framework for LLM agents! It predicts its own learning progress (LP) in vast natural language goal spaces, enabling efficient exploration of complex domains.🌍✨Learn more: 🔗 arxiv.org/abs/2502.07709 #OpenEndedLearning #LLM #RL
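MAGELLAN's contribution is learning to *predict and generalize* LP across a huge goal space with the LLM's own representations; the empirical learning-progress baseline it builds on can be sketched as follows (a minimal stand-in, not the paper's implementation; class and parameter names are made up):

```python
import random
from collections import deque

class LPGoalSampler:
    """Track a per-goal success history and prefer goals whose success rate
    is changing: absolute LP = |recent success rate - older success rate|."""

    def __init__(self, goals, window=10, seed=0):
        # Keep the last 2*window outcomes per goal: an "old" and a "recent" half.
        self.histories = {g: deque(maxlen=2 * window) for g in goals}
        self.rng = random.Random(seed)

    def record(self, goal, success):
        self.histories[goal].append(float(success))

    def lp(self, goal):
        h = list(self.histories[goal])
        if len(h) < 2:
            return 1.0  # unexplored goals get maximal LP so they get tried
        mid = len(h) // 2
        old, recent = h[:mid], h[mid:]
        return abs(sum(recent) / len(recent) - sum(old) / len(old))

    def sample(self, eps=0.1):
        # Epsilon-greedy over LP: mostly pick the highest-LP goal,
        # occasionally pick a random one to keep estimates fresh.
        if self.rng.random() < eps:
            return self.rng.choice(list(self.histories))
        return max(self.histories, key=self.lp)
```

A goal the agent is actively mastering (success rate climbing) gets high LP and is sampled often; already-mastered or hopeless goals (flat success rate) get LP near zero and are deprioritized.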
Introducing SOAR 🚀, a self-improving framework for program synthesis that alternates between search and learning (accepted to #ICML!). It brings LLMs from just a few percent on ARC-AGI-1 up to 52%. We're releasing the finetuned LLMs, a dataset of 5M generated programs, and the code. 🧵
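The alternation can be sketched as a loop: search samples candidate programs, a verifier keeps the ones that pass the task's examples, and the learning phase updates the sampler on those verified hits. A toy skeleton (illustrative only; `sampler` and `learn` are hypothetical stand-ins for an LLM generator and a finetuning step):

```python
def soar_loop(sampler, tasks, rounds, learn):
    """Sketch of a search-then-learn loop.

    sampler(task) -> iterable of candidate programs (callables).
    learn(sampler, dataset) -> an updated sampler (e.g. finetuned on hits).
    """
    dataset = []
    for _ in range(rounds):
        # Search phase: keep only candidates that pass every i/o example.
        for task in tasks:
            for prog in sampler(task):
                if all(prog(x) == y for x, y in task["examples"]):
                    dataset.append((task, prog))
        # Learning phase: the verified solutions become training data.
        sampler = learn(sampler, dataset)
    return sampler, dataset
```

The key property is that verification is cheap and exact (run the program on the examples), so the search phase produces clean supervision for the learning phase without human labels.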
New blog post ! What if LLM agents could learn by doing, not just by reading? 🤔 2024 was the year of "agentic AI"—systems that plan, act, and execute complex workflows autonomously. But current agents face critical limitations... 🧵
Curious about LLM interpretability and understanding? We borrowed concepts from genetics to map language models, predict their capabilities, and even uncover surprising insights about their training! Come see my poster at #ICLR2025, 3pm, Hall 2B #505!

🧠 One of the key limitations of LLMs today is their lack of metacognition: they were (mostly) not trained to know what they know or don't know, what they can or can't do. 🚀At @FlowersINRIA, we're proposing an approach to build metacognition into LLMs: MAGELLAN!
Enabling forms of metacognition in LLMs is a frontier challenge in #AI. We've made progress in this direction: 🧭MAGELLAN allows curiosity-driven LLMs to learn to predict and generalize their own learning progress, and to navigate very large spaces of goals 🚀 Details here 👇
🚀 Exciting Internship Opportunities for AI and CogSci Students🌟 Join @FlowersINRIA and work on these cool topics: 🔧 Curriculum learning of skill libraries in autotelic agents using LLMs and program synthesis with @PourcelJulien 🎯 Balancing Exploration and Exploitation in…