Ryuhaerang Choi @CHI2025
@haerang_hci
Ph.D. student @KAIST EE @hcikaist #HCI #mentalWellbeing | prev @BellLabs
I'll present our #CHI2025 paper, "Private Yet Social: How LLM Chatbots Support and Challenge Eating Disorder Recovery," in the LLM for Health session. When: 9:36 AM, Wed 30 Apr. Where: Annex Hall F206. Paper: dl.acm.org/doi/pdf/10.114… Grateful to Taehan, @subin_hci, Prof. Jennifer G Kim, and @wewantsj

Great #CHI2024 talk by @haerang_hci on helping users control exposure to food content on social media. We are presenting our @RHEDC_Project work on remote healthcare for eating disorders (video stream, 4 pm tomorrow). Hope to catch you for a chat about our shared interests!
What role can AI play in unlocking musical creativity? At #CHI2025 (Best Paper Award!), we present Amuse: a songwriting tool for musicians that turns photos, sounds, and stories into chords. Paper: arxiv.org/abs/2412.18940 Project: yewon-kim.com/amuse/
When learning new skills, people often watch *multiple* how-to videos to understand different approaches and details. In our #IUI2025 paper, we introduce VideoMix, a system that uses vision-language models to aggregate videos into one organized, easy-to-digest experience.
Join me for my #NeurIPS2024 poster presentation on 11 Dec! "DEX: Data Channel Extension for Efficient CNN Inference on Tiny AI Accelerators" When: Wed 11 Dec, 4:30-7:30 pm. Where: East Exhibit Hall A-C 1405. #NeurIPS #NeurIPS24 #MachineLearning #AI #AIAccelerator
(FL)² enables 99.9% label-free federated learning! I am on my way to #NeurIPS2024. Catch us at #NeurIPS - Date: Wednesday, Dec 11 - Time: 11 a.m. to 2 p.m. PST - Location: East Exhibit Hall A-C #3603 Paper: arxiv.org/abs/2410.23227
SoundCollage explores finding hidden classes in existing audio datasets, enabling efficient model training for diverse downstream tasks without new data collection. Great work by @haerang_hci with @spdimitris, @wewantsj, @raswak and @malekz4deh.
At #UIST2024, I'm presenting two projects: SonoHaptics and Auptimize! We tackle perceptual challenges in XR interactions (gaze-based object selection & audio localization) through multisensory feedback, without relying on visual displays. 🧵
[#KAIST School of Computing study] We are recruiting evaluators to assess the quality of system-generated viewer personas. (Remote, 1.5 hours, KRW 45,000.) Please see the link below for details and to participate. Survey link: forms.gle/U7PdbTy9JzYnge… Eligibility: YouTube creators who have run a channel for at least one year.
[#KAIST School of Computing: recruiting study participants] An experiment evaluating the quality of integrated and conversational search. We are recruiting participants to try an AI-based generative search service firsthand and rate the quality of the search results and their experience! Please see the link below for details. Experiment sign-up link: bit.ly/cue-sge
Undergraduate interns in our lab are running a survey on the challenges researchers face and on "visual structuring." If you have research experience, we would appreciate your participation! (Takes under 5 minutes.) Survey link: moaform.com/q/DPlphF Details in the thread below.
#CHI2024 MineXR: Mining Personalized XR Interfaces MineXR enables researchers to collect & analyze personalized XR user interface data. We also contribute a dataset of XR widgets & layouts, and design guidelines for future XR UIs. Paper, code, dataset: bit.ly/minexr
@haerang_hci is on her way to ACM CHI to present FoodCensor, winner of a Best Paper Honorable Mention award! FoodCensor monitors and hides passively exposed food content on smartphones (e.g., the YouTube app) and personal computers (via a Chrome extension).