Amit Parekh
@amitkparekh_
Nice post on software engineering. "Cognitive load is what matters" minds.md/zakirullin/cog… Probably the most true, least practiced viewpoint.
I will be presenting our paper, Shaking Up VLMs: Comparing Transformers 🤖 and Structured State Space Models 🐍 for Vision & Language Modeling, today at #EMNLP24. If you are interested, come hang out by our poster (Riverfront Hall, 16:00). Details here: arxiv.org/abs/2409.05395
If you are around at #EMNLP2024, come see me talk about our work on discovering minority voices in datasets (arxiv.org/abs/2407.14259). I’ll be in the Ethics, Bias, and Fairness slot in the Ashe auditorium today, but I'm also very open to chats throughout the conference!
Really pleased to say this has been accepted at #EMNLP2024 main
🚨 NEW PAPER ALERT 🚨 Introducing the GlobalBias dataset… We ask Claude 3, GPT 3.5, GPT 4o, and Llama 3 to produce character profiles based on given names from GlobalBias for 40 different gender-by-ethnicity groups. We find that all models display stereotypical outputs (1/4)
LLMs are great but they are brittle to minimal prompt perturbations (e.g., typos, indentation, ...). Q: How do we create truly multimodal foundation models? A: Do as we humans do: text as visual perception! Enter PIXAR, our work at #ACL2024NLP! arxiv.org/abs/2401.03321
We developed a framework to find robust clusters of diverse minority perspectives, without adding metadata or explicitly training for it!!! Check out the paper for details arxiv.org/abs/2407.14259
So very, very proud to share our new paper “Voices in a Crowd: Searching for Clusters of Unique Perspectives” (arXiv:2407.14259), a novel framework for organically finding clusters of unique voices (perspectives) in datasets. 🧵 for summary, co-authors @amitkparekh_ @sinantie
🚀 Excited to share our latest paper: "Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation"! Paper: arxiv.org/abs/2406.19297 (1/5)
next year we will have AI job interviewers meeting AI applicants “this meeting could have been an API call”
yeah I'm working on the frontier of AI (googling pytorch errors that only me and one FB engineer have run into)
The Chinchilla scaling paper by Hoffmann et al. has been highly influential in the language modeling community. We tried to replicate a key part of their work and discovered discrepancies. Here's what we found. (1/9)
Semantics at @semdialmeeting "Modelling Disagreement or Modelling Perspectives?" by @NikVits, @amitkparekh_, @t_dinkar, @gavin_does_nlp, @sinantie & @verena_rieser We predict disagreement on subjective data while preserving individual perspectives! aclanthology.org/2023.semeval-1…
I cannot get over how beautiful this book is from @francoisfleuret. NeurIPS fashion accessory for the year.