Leping Qiu
@LepingQ
HCI Research | Incoming PhD Student @UofT DGP Lab | Prev. @Tsinghua_Uni | 🦋 http://bsky.app/profile/lepingqiu.com
Take hand-written notes in mixed reality! Our #CHI2025 paper "MaRginalia" explores how mixed reality can better support university students’ digital note-taking during in-person lectures. Students can stay engaged without needing to look down at their devices. #HCI #XR 🧵 (1/8)

The #DynamicAbstractions Reading Group returns this Friday! 🎉 📍Topic: Simulations and Understanding 🗓️ Date & Time: June 27, 12pm ET / 9am PT 🎙️ Speaker: Andy Matuschak (andymatuschak.org) ✉️ Full letter: buttondown.com/dynamic_abstra… 🔁 Please #RT to share! 🧵 More:
Widgets are coming to visionOS, with a new three-dimensional style. Check out our session with @moritzvv to bring your existing widgets to visionOS and design native ones that take full advantage of the platform’s spatial and visual capabilities. developer.apple.com/videos/play/ww… #WWDC25
Our #SGP25 work studies a simple and effective way to uniformly sample implicit surfaces by casting rays. (1/9) “Uniform Sampling of Surfaces by Casting Rays” w/ @_abhishekmadan @nmwsharp @_AlecJacobson
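The thread (1/9) has the details; as a rough illustration of the title’s intuition, here is a minimal sketch assuming the classic integral-geometry fact that intersection points of uniformly random lines with a surface are uniformly distributed over its area. Everything below (the toy sphere SDF, the helper names, the sphere-tracing loop) is my own illustrative assumption, not code or notation from the paper:

```python
# Minimal sketch (illustrative assumption, not the paper's algorithm):
# cast uniformly random lines through a bounding ball and keep every
# line-surface intersection; those hit points are uniform over surface area.
import numpy as np

rng = np.random.default_rng(0)

def sdf_sphere(p, radius=1.0):
    """Signed distance to a sphere at the origin (toy implicit surface)."""
    return np.linalg.norm(p) - radius

def random_line(bound=2.0):
    """Uniform random line through the ball of radius `bound`:
    uniform direction on S^2 plus a uniform offset on the perpendicular disk."""
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, a); u /= np.linalg.norm(u)
    v = np.cross(d, u)                        # u, v span the plane perpendicular to d
    r, theta = bound * np.sqrt(rng.uniform()), rng.uniform(0.0, 2.0 * np.pi)
    origin = r * (np.cos(theta) * u + np.sin(theta) * v) - bound * d
    return origin, d                          # origin lies outside the ball

def trace_hits(origin, d, sdf, t_max=4.0, eps=1e-5, max_steps=10_000):
    """Sphere-trace along the line and record every surface crossing."""
    hits, t = [], 0.0
    for _ in range(max_steps):
        if t >= t_max:
            break
        p = origin + t * d
        dist = abs(sdf(p))                    # |SDF| handles both sides of the surface
        if dist < eps:
            hits.append(p)
            t += 1e-2                         # nudge past the hit, keep tracing
        else:
            t += dist
    return hits

samples = []
while len(samples) < 1000:                    # lines that miss contribute nothing
    o, d = random_line()
    samples.extend(trace_hits(o, d, sdf_sphere))
samples = np.array(samples)                   # approximately uniform points on the sphere
```

Lines that miss the surface cost a trace but add no bias, and under the uniform-line distribution above no per-hit reweighting should be needed, which is what makes the ray-casting view appealing.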
🌸 Thanks everyone for coming! 🌸#CHI2025 #DynamicAbstractions
🌸 #CHI2025 meetup! At CHI'24, we had an informal meetup for folks into dynamic abstractions and more. Since then: UIST'24 workshop, reading group, and Discord chats. This year, we'll have small-group lunches on Mon 4/28. Add your name by Sun 4/27: shorturl.at/QZ3Cw 🧵
I am excited to join the University of Utah as an Assistant Professor this Fall. I will be hiring 2 Ph.D. students starting in Spring/Fall 2026. I will continue to work on UI optimization/generation and eye tracking-related research. I’d love suggestions for a lab name! @UUtah
I am presenting IdeaSynth in the last paper session at #CHI2025 right now! Feel free to come by G314-315 to learn how we use LLMs to provide literature-grounded assistance for research idea development! The talk is happening at around 10:12 AM.
🔬Research ideation is hard: after the spark of a brilliant initial idea, much work is still needed to develop it into a well-thought-out project by iteratively expanding and refining the idea and grounding it in relevant literature. How can we better support this?
Everyday proactive AI agents must understand users' long-term memory to truly assist (e.g., reminding users of forgotten things at the right moment). At #CHI2025, we present OmniQuery, an automatic pipeline that reconstructs users' long-term memory so agents can better understand users…
Congrats to James Hou on his inspiring #CHI2025 talk on our work Online-EYE, a multimodal approach to implicit eye-tracking calibration in XR using just a few controller point-and-select actions. Info: bit.ly/4jyyDtC Recap video: youtu.be/4GvfR5G2rps
The PI lab @thuhci landed in Yokohama! A big family reunion. Let's connect and enjoy! #CHI2025
This is happening today at 2:10 PM during the Classroom Technology Session in G302! Come check out how mixed reality supports note-taking and facilitates engagement!
👩‍🎓 Join our HCI Lab Tour at Science Tokyo! ✨ Discover Japanese HCI research through interactive demos and posters! Place: Science Tokyo (1-min from Ookayama Station) Registration required: sites.google.com/vogue.cs.titec… #CHI2025
🤖👩🏻‍💻 Proactive AI tools like @GitHubCopilot @cursor_ai @allhands_ai promise to assist developers by anticipating their needs and automating engineering processes. But do they truly help? We evaluated three design probes to explore the trade-offs of proactive AI programming support.