Yulu Qin
@yulu_qin
🐸 Check out our paper (to be presented at @emnlpmeeting) showing benefits of semantic training signals on hierarchical generalization! If I'm allowed to say this as an author: super cool results, interesting implications, and lots of exciting follow-up directions
NEW PAPER! We (@najoungkim and I) find that training on mapping from form to meaning leads to improved hierarchical generalization.
📣 Excited to announce SpaVLE: #NeurIPS2025 Workshop on Space in Vision, Language, and Embodied AI! 👉 …vision-language-embodied-ai.github.io 🦾Co-organized with an incredible team → @fredahshi · @maojiayuan · @DJiafei · @ManlingLi_ · David Hsu · @Kordjamshidi 🌌 Why Space & SpaVLE? We…
Seeing an experiment and thinking "but have they tried X? what if we do Y?" is a key part of research and a start to new discoveries. RExBench tests if coding agents can implement new extensions. It complements recent evals (e.g., PaperBench from @OpenAI) on replication! See 👇
Can coding agents autonomously implement AI research extensions? We introduce RExBench, a benchmark that tests if a coding agent can implement a novel experiment based on existing research and code. Finding: Most agents we tested had a low success rate, but there is promise!
Exciting!
🚨 In 2025, @emnlpmeeting will take place in Suzhou, China from Nov. 5-9!
It was fun and I was still chewing😆
A bunch of amazing people and me @ Versailles!
Could we observe unsupervised alignment between vision and language in unimodal deep nets? What about nets trained on raw data from just a single child? Check out @CindyLuo_K's #CogSci2024 talk today 2:55 in Mees I (T.31.03), and paper escholarship.org/uc/item/7dz6b6…
Today @WenjieWLi introduces a new AI benchmark for identifying Agency, Affiliation, Belief, and Intention, inspired by infant cognition. Check out the poster (P2-E-244) at 1! Paper: escholarship.org/uc/item/5ft9x5…
Many thanks to @LakeBrenden and @wentaow10, check it out at 13:00-14:15 today!
What does a small language model learn (trained from scratch) given just slices of one child's linguistic input? Find out at @yulu_qin's #CogSci2024 poster today (P1-E-26). Paper here, escholarship.org/uc/item/998668…
1/ Today in Science, we train a neural net from scratch through the eyes and ears of one child. The model learns to map words to visual referents, showing how grounded language learning from just one child's perspective is possible with today's AI tools. science.org/doi/10.1126/sc…