Xinyi Chen
@XinyiChen2
Looking forward to speaking at @PrincetonSML next week about neural architectures inspired by dynamical systems: csml.princeton.edu/events/beyond-…
Thank you @max_simchowitz for the shoutout! Making ML more efficient by reasoning about dynamical systems is a really exciting direction, and I look forward to advancing more in this space!
As someone who loves dynamical systems and control, I've been really excited to see @XinyiChen2 and @HazanPrinceton's recent papers making control work for deep learning! Very cool insights, both on the architecture and optimization side. I encourage you to check them out!…
🚨 @SCSatCMU PhD applications close tomorrow, December 11, at 3:00pm ET! I’m actively recruiting master's and PhD students interested in the theory and practice of decision making with generative models, especially for robotics, RL and world models! CMU is one of the most…
Very excited about our work on spectral transformers!
All you want to know about spectral transformers in one webpage, papers & code: (& we'll try to keep it updated!) sites.google.com/view/gbrainpri…
Want to learn about the math behind robot learning? I'll be presenting an invited talk on "Provable Guarantees for Generative Behavior Cloning" at 11:55am CEST at the 2024 ICML Workshop on Reinforcement Learning and Control (link in 🧵) icml.cc/virtual/2024/w…
I'll be at @icmlconf next week! Giving a plenary talk at the HiLD workshop and an oral on our recent paper (arxiv.org/abs/2405.19534) at the MHFAIA workshop! Pls reach out to chat if you're also interested in any of these topics! 😊
New work w/@sadhikamalladi, @lilyhzhang, @xinyichen2, @QiuyiRichardZ, Rajesh Ranganath, @kchonyc: Contrary to conventional wisdom, RLHF/DPO does *not* produce policies that mostly assign higher likelihood to preferred responses than to less preferred ones.
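A hedged sketch of the quantity this finding is about (not the paper's code): the fraction of preference pairs for which a policy assigns higher log-likelihood to the preferred response. The scoring function here is a hypothetical stand-in for a real language model's log-likelihood.

```python
# Hypothetical illustration: measure how often a policy ranks the
# preferred response y_w above the dispreferred one y_l by likelihood.
def preferred_likelihood_rate(pairs, log_likelihood):
    """Fraction of (preferred, dispreferred) pairs ranked correctly."""
    correct = sum(
        1 for y_w, y_l in pairs if log_likelihood(y_w) > log_likelihood(y_l)
    )
    return correct / len(pairs)

# Toy stand-in scorer: response length (purely illustrative, not a real LM).
toy_pairs = [("a good answer", "bad"), ("ok", "a very long bad answer")]
rate = preferred_likelihood_rate(toy_pairs, log_likelihood=len)
print(rate)  # 0.5 on this toy data
```

The paper's claim is that for RLHF/DPO-trained policies this rate is often not much above chance, contrary to what the training objective might suggest.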
Open source code for spectral SSM is now available! github.com/google-deepmin… Thanks to our Google DeepMind Princeton team: @danielsuo @naman33k @XinyiChen2
most exciting paper *ever* from our @GoogleAI lab at @Princeton: @naman33k @danielsuo @XinyiChen2 arxiv.org/abs/2312.06837 *** Convolutional filters predetermined by the theory, no learning needed! ***
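A minimal sketch of the "no learning needed" filters, assuming the construction from the spectral filtering line of work this paper builds on: the fixed convolutional filters are the top eigenvectors of a specific Hankel matrix with entries 2/((i+j)^3 - (i+j)); the function name and sizes below are illustrative, not from the paper's released code.

```python
# Hedged sketch: fixed spectral filters as top eigenvectors of a Hankel
# matrix. Nothing here is learned; the filters depend only on seq_len.
import numpy as np

def spectral_filters(seq_len: int, num_filters: int) -> np.ndarray:
    """Top-`num_filters` eigenvectors of the Hankel matrix Z, largest first."""
    i = np.arange(1, seq_len + 1)
    s = i[:, None] + i[None, :]           # s = i + j with 1-based indices
    Z = 2.0 / (s**3 - s)                  # symmetric Hankel matrix
    eigvals, eigvecs = np.linalg.eigh(Z)  # eigenvalues in ascending order
    return eigvecs[:, -num_filters:][:, ::-1]

filters = spectral_filters(seq_len=256, num_filters=16)
print(filters.shape)  # (256, 16)
```

Because the eigenvalues of this matrix decay very fast, a small fixed bank of filters suffices, which is what makes precomputing them (rather than learning them) viable.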
Excited w. our first research in AI safety & alignment: A game-theoretic approach for AI safety via debate: arxiv.org/abs/2312.04792 This is a collaboration with my student @XinyiChen2 , our alumna @_angie_chen from NYU, and Dean Foster:
See y'all at NeurIPS next week. Presenting Sketchy w @XinyiChen2 Jennifer Sun @_arohan_ @HazanPrinceton. High level blog post: vladfeinberg.com/2023/10/18/ske… Also, looking for student researchers for OCO🤝Control theory+applied internship! HMU @ NOLA
I will be at NeurIPS - 12th, 13th and will be hanging out at the posters, Sketchy: @FeinbergVlad @XinyiChen2 Jennifer Sun @HazanPrinceton neurips.cc/virtual/2023/p… SoNew: @Devvrit_Khatri @dvsaisurya @GuptaVineetG Cho-Jui Hsieh, @inderjit_ml neurips.cc/virtual/2023/p…
Happy to share a new blog post w. @XinyiChen2 on meta-optimization, and its relationship to adaptive gradient methods and parameter-free optimization! minregret.com/2023/05/15/met…