Nisheet Patel
@nisheet0
Theoretical neuroscience | Reinforcement learning | Decision-making | Memory
Want your robot to learn agile skills without reward shaping or designing expert controllers? We propose WASABI, which allows Solo, a quadruped robot, to acquire highly dynamic skills (e.g. a backflip) from only rough, partial, hand-held human demonstrations. sites.google.com/view/corl2022-…
Swipe your way through #ICML2025 with Axy: beta.axy-app.com/icml2025 The recommender learns your preferences in 5-10 interactions. 🔥 Pro-tip: If you search for a couple of relevant items and add them to your agenda first, the recommendations will be way better.
🚀 Axy is the official AI-companion app (in beta) for #COSYNE2025! 🧠 Discover relevant posters, 👥 find poster buddies, and ⏰ never miss important talks with our AI-powered recommendations. 🔍 Made by scientists, for scientists with ❤️. Try it now: 🔗 beta.axy-app.com/cosyne2025?ref… ✨
How does the brain control the numerous muscles of the body? Say you want to rotate two balls in your hand: how does your brain achieve that? Read our article in @NeuroCellPress to learn more! cell.com/neuron/fulltex…
We're super excited to launch Axy at the European Federation for Primatology's #EFP2024 conference 🦍🐵🦧🐒
We're excited to let you know that you can also access our programme via an app at axy.up.railway.app - check it out!
Are you interested in motor skills, musculoskeletal control and reinforcement learning? Check out our manuscript: biorxiv.org/content/10.110…
Acquiring musculoskeletal skills with curriculum-based reinforcement learning biorxiv.org/cgi/content/sh… #biorxiv_neursci
We had great fun participating and writing the paper with all the winning teams and organizers - check it out! I'm quite excited about what one can learn about biological motor control with the new simulators! Our team: @chiappa_alberto @pablo_tano8 @nisheet0 @pouget_alex
The MyoChallenge '22 paper is out 🔥 Have a look at the smart solutions 🧠 of last year's winners. Lots of good ideas in it for this year's manipulation challenge 😈 ➡️ proceedings.mlr.press/v220/caggiano2…
MyoChallenge '22—a retrospective on progress, lessons learned & key takeaways 🫴🎲 Check it out on our new Medium account: 📽️ Videos of the winning policies 🏆 Photos with winners from NeurIPS '22 🧑‍🏫 Link to MyoSymposium speakers and talks medium.com/@myosuite/myoc…
🔥 MyoSuite 1.4 released 🔥 sites.google.com/view/myosuite ➡️ Validated upper-extremity models with interacting exo-robots ➡️ 4000x faster than SOTA (suitable for RL) ➡️ Full contact dynamics Extremely excited about this release, which took >1 year of dev & testing, and about our growing community
Team stiff_finger (@pablo_tano8 @chiappa_alberto @nisheet0) receiving their award plaque and part of the organization team (@CaggianoVitt @GDurandau @Vikashplus ) during #NeurIPS2022
Congrats to the team! So proud of PhD students @chiappa_alberto, @pablo_tano8 and @nisheet0 and co-Alex: @pouget_alex
A recently developed machine learning model has given scientists a new schema for predicting the scent of individual molecules: metabolism. quantamagazine.org/ai-model-links…
RL with KL penalties – a powerful approach to aligning language models with human preferences – is better seen as Bayesian inference. A thread about our paper (with @EthanJPerez and @drclbuckley) to be presented at #emnlp2022 🧵arxiv.org/pdf/2205.11275… 1/11
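A hedged sketch of the standard result behind this framing (notation mine, not necessarily the paper's): with a reference policy \pi_0, reward r, and KL coefficient \beta, the KL-penalised objective is maximised by a distribution with the form of a Bayesian posterior, where \pi_0 acts as the prior and \exp(r/\beta) plays the role of a likelihood:

\[
J(\pi) \;=\; \mathbb{E}_{x \sim \pi}\!\left[r(x)\right] \;-\; \beta\,\mathrm{KL}\!\left(\pi \,\|\, \pi_0\right)
\quad\Longrightarrow\quad
\pi^*(x) \;=\; \frac{1}{Z}\,\pi_0(x)\,\exp\!\left(\frac{r(x)}{\beta}\right),
\qquad
Z \;=\; \sum_x \pi_0(x)\,\exp\!\left(\frac{r(x)}{\beta}\right).
\]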
Discrete Factorial Representations as an Abstraction for Goal Conditioned RL Riashat Islam, Hongyu Zang, Alex Lamb, Kenji Kawaguchi, Xin Li, Romain Laroche, Yoshua Bengio arxiv.org/abs/2211.00247 NeurIPS'22
Training #ReinforcementLearning algorithms from scratch is computationally intensive and time-consuming. We propose an alternative approach, Reincarnating RL, that integrates prior computation into the RL training workflow. Learn more and grab the code at goo.gle/3Ws2TLk
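As an illustration only (not the exact method from the linked post), here is a minimal, hypothetical sketch of one way prior computation can be reused: warm-starting a new student policy by distilling a previously trained teacher policy before ordinary online RL resumes. All names, dimensions, and the small PyTorch architecture are assumptions for the example.

# Hypothetical sketch of "reincarnating" an RL agent via policy distillation.
# A fresh student network is pulled toward an existing teacher policy's action
# distribution, so later online RL does not have to start from scratch.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS = 8, 4  # illustrative sizes

def make_policy() -> nn.Module:
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))

teacher = make_policy()  # stands in for a previously trained policy (the prior computation)
student = make_policy()  # freshly initialised agent to be "reincarnated"
optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)

for step in range(1_000):
    # Placeholder observations; in practice these would come from logged or replayed data.
    obs = torch.randn(256, OBS_DIM)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(obs), dim=-1)
    student_log_probs = F.log_softmax(student(obs), dim=-1)
    # KL(teacher || student): distil the teacher's behaviour into the student.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After this warm start, the student would continue with standard online RL fine-tuning.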