Shao-Hua Sun
@shaohua0116
Assistant Professor @ National Taiwan University (NTU) | CS Ph.D. @USC | Robot Learning, Reinforcement Learning, Program Synthesis | Assistant Professor, Department of Electrical Engineering, NTU
Our #ICML2025 Programmatic Representations for Agent Learning workshop will take place tomorrow, July 18th, at the West Meeting Room 301-305, exploring how programmatic representations can make agent learning more interpretable, generalizable, efficient, and safe! Come join us!

Wenhao @Stacormed talks about What language does VLAs ponder in? at the Programmatic Representations for Agent Learning workshop at #ICML2025!
Jason @JasonMa2020 talks about Foundation Reward Models for Robot Learning at the Programmatic Representations for Agent Learning workshop at #ICML2025!
Sheila @SheilaMcIlraith talks about Programmatic Reward Models: Exploiting Reward Function Structure to Help Agents Learn, Plan, and Remember at the Programmatic Representations for Agent Learning workshop at #ICML2025!
Dale talks about Large Language Models and Computation at the Programmatic Representations for Agent Learning workshop at #ICML2025!
Amy @yayitsamyzhang talks about Leveraging Programmatic Structure in Reinforcement Learning at the Programmatic Representations for Agent Learning workshop at #ICML2025!
Kicking off the Programmatic Representations for Agent Learning workshop at #ICML2025 with an invited talk by @animesh_garg!
Looking fwd to presenting this talk @Google next Thurs at noon. It will be live in person in Mountain View CA (not online) but is free and open to the public: How to Close the 100,000 Year “Data Gap” in Robotics rsvp.withgoogle.com/events/how-to-…
In an era of billion-parameter models everywhere, it's incredibly refreshing to see how a fundamental question can be formulated and solved with simple, beautiful math. - How should we orient a solar panel ☀️🔋? - Zero AI! If you enjoy math, you'll love this!
Kicking off #ICML2025 with the Generative AI meets Reinforcement Learning tutorial by @yayitsamyzhang and @ben_eysenbach!

Excited to share Energy-Based Transformers (EBTs), which allow you to implement system 2 thinking in any modality! EBTs formulate reasoning as an energy optimization problem, allowing models to internally think without complexities like CoT or multiple recurrent latents.
How can we unlock generalized reasoning? ⚡️Introducing Energy-Based Transformers (EBTs), an approach that out-scales (feed-forward) transformers and unlocks generalized reasoning/thinking on any modality/problem without rewards. TLDR: - EBTs are the first model to outscale the…
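The core idea above — inference as energy minimization rather than a single forward pass — can be sketched with a toy example. This is not the EBT architecture itself: the quadratic energy function, its analytic gradient, and the `think` loop below are all illustrative assumptions; a real EBT learns the energy with a transformer and descends with autograd.

```python
import numpy as np

# Toy sketch of energy-based inference: instead of emitting a prediction in
# one forward pass, refine a candidate y by gradient descent on an energy
# E(x, y). More descent steps = more "thinking" spent on the prediction.
# Here E is a hand-picked quadratic so the gradient is analytic.

def energy(x, y):
    """Scalar energy: low when candidate y is compatible with context x."""
    return 0.5 * np.sum((y - 2.0 * x) ** 2)

def grad_energy(x, y):
    """Analytic gradient of the quadratic energy with respect to y."""
    return y - 2.0 * x

def think(x, steps=100, lr=0.1):
    """System-2-style inference: iteratively minimize E(x, y) over y."""
    y = np.zeros_like(x)              # start from an uninformed candidate
    for _ in range(steps):
        y -= lr * grad_energy(x, y)   # one "thinking" step
    return y

x = np.array([1.0, -3.0])
y = think(x)
print(y)  # converges toward the energy minimum at 2*x = [2., -6.]
```

The appeal of this framing is that compute at test time is a dial: running more descent steps lowers the energy further, with no chain-of-thought prompting or reward signal required.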
Say ahoy to 𝚂𝙰𝙸𝙻𝙾𝚁⛵: a new paradigm of *learning to search* from demonstrations, enabling test-time reasoning about how to recover from mistakes w/o any additional human feedback! 𝚂𝙰𝙸𝙻𝙾𝚁 ⛵ out-performs Diffusion Policies trained via behavioral cloning on 5-10x data!
🧵1/ New paper! 📄 InnateCoder: Learning Programmatic Options with Foundation Models. This is Rubens Moraes' final chapter of his PhD thesis from Universidade Federal de Viçosa, Brazil, in collaboration with Quazi Sadmine and Hendrik Baier. arXiv: arxiv.org/abs/2505.12508