ROHAN WADHAWAN
@rohanwadhawan7
ML Science @AbridgeAI | NLP Research @UCLA | MS CS @UCLA | Teaching Associate & AI Tutor @UCLA | Multimodal AI | LLM | RLHF
Thrilled to announce that ConTextual is headed to #ICML2024! 🎉Huge thanks to our stellar team @hbXNov @kaiwei_chang @VioletNPeng 😍 Calling on the vision-language community to test their multimodal chatbots on our high-quality benchmark. New insights & leaderboard updates coming soon!
🥳ConTextual is accepted to #ICML2024! Consider submitting your large multimodal model responses to our leaderboard. We hand-wrote 500+ instructions and responses for this dataset ✍️ The gold responses are not public, so it is a good benchmark for high-quality eval 😍!
Cross-lingual transfer can be as easy as swapping model layers between LLMs! 🔀 Our model merging method can compose math and language skills by swapping the top & bottom layers from an SFT’d target-language expert into a math expert, without retraining arxiv.org/pdf/2410.01335 🧵: [1/3]
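A minimal sketch of the layer-swapping idea, assuming two same-architecture decoder-only checkpoints loadable with Hugging Face transformers; the checkpoint paths, the number of swapped layers, and the `model.layers.{i}.` key prefix are illustrative assumptions, not the paper's exact recipe.

```python
from transformers import AutoModelForCausalLM

# Hypothetical checkpoints: a math-SFT expert and a target-language-SFT expert.
math_expert = AutoModelForCausalLM.from_pretrained("path/to/math-expert")
lang_expert = AutoModelForCausalLM.from_pretrained("path/to/language-expert")

num_layers = math_expert.config.num_hidden_layers
k = 4  # how many bottom and top transformer blocks to transplant (assumed)
swap_ids = set(range(k)) | set(range(num_layers - k, num_layers))

math_sd = math_expert.state_dict()
lang_sd = lang_expert.state_dict()

# Overwrite the selected blocks of the math expert with the language expert's weights.
for name in math_sd:
    for i in swap_ids:
        if name.startswith(f"model.layers.{i}."):
            math_sd[name] = lang_sd[name].clone()

math_expert.load_state_dict(math_sd)  # merged model: math skills + target-language layers
```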
We won Best in KLAS today for ambient. All thanks to our partners, and all our people.
🏆 𝗔𝗻𝗻𝗼𝘂𝗻𝗰𝗲𝗺𝗲𝗻𝘁: Abridge wins Best in KLAS for Ambient AI segment. 𝗔𝗯𝗿𝗶𝗱𝗴𝗲’𝘀 𝗡𝗼. 𝟭 𝗿𝗮𝗻𝗸 𝗶𝗻 𝘁𝗵𝗲 𝗕𝗲𝘀𝘁 𝗶𝗻 @KLASResearch 𝟮𝟬𝟮𝟱 𝗦𝗼𝗳𝘁𝘄𝗮𝗿𝗲 & 𝗦𝗲𝗿𝘃𝗶𝗰𝗲 𝗿𝗲𝗽𝗼𝗿𝘁 is truly special because the results are based entirely on…
Absolutely thrilled to partner with Hopkins 🙏
🎉 𝗔𝗻𝗻𝗼𝘂𝗻𝗰𝗲𝗺𝗲𝗻𝘁: Johns @HopkinsMedicine has finalized an agreement with Abridge to deploy its AI platform for clinical documentation across the enterprise—to all specialties and care settings. Abridge’s AI platform will be available to 𝟲,𝟳𝟬𝟬 𝗰𝗹𝗶𝗻𝗶𝗰𝗶𝗮𝗻𝘀,…
Can VLMs improve 𝘁𝗵𝗲𝗺𝘀𝗲𝗹𝘃𝗲𝘀💪? We propose 🔥𝗩𝗜𝗦𝗖𝗢, a benchmark to evaluate VLMs’ 𝗰𝗿𝗶𝘁𝗶𝗾𝘂𝗲 and 𝗰𝗼𝗿𝗿𝗲𝗰𝘁𝗶𝗼𝗻 capabilities, towards the higher goal of VLMs’ autonomous self-improvement. 🌐Project: visco-benchmark.github.io 📄Paper: arxiv.org/abs/2412.02172
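For intuition, a minimal sketch of the critique-then-correct loop this kind of benchmark probes, assuming a generic `vlm(image, prompt)` chat interface; the function name and prompts are placeholders, not VISCO's actual evaluation protocol.

```python
def critique_and_correct(vlm, image, question):
    """Toy self-improvement loop: answer, critique the answer, then revise it."""
    answer = vlm(image, f"Answer the question: {question}")
    critique = vlm(image, f"Question: {question}\nAnswer: {answer}\n"
                          "Critique this answer step by step and point out any errors.")
    revised = vlm(image, f"Question: {question}\nAnswer: {answer}\n"
                         f"Critique: {critique}\nGive a corrected final answer.")
    return answer, critique, revised
```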
🌍Are LLMs aware of cultural and legal safety in today’s geo-diverse world? 🚀Introducing SafeWorld, our #NeurIPS2024 paper and benchmark assessing LLMs’ understanding of geo-diverse safety, based on cultural norms and policies across 50 countries and 493 regions/races. ⚖️We…
Congratulations Professor @VioletNPeng, @FabriceYHC, @yufei_t, @AlexanderSpangh, and the team on this great achievement! 🎉
Thrilled, grateful, and humbled to have won 3 outstanding paper awards at #EMNLP2024!!! Not even in my wildest dreams. Immense thanks to my amazing students and collaborators! All three works are on evaluating LLMs’ abilities in creative narrative generation. 🧵👇
💥Check out our latest work BRIEF, a multi-hop compressor! BRIEF is a lightweight, T5-based model that performs query-aware multi-hop reasoning by compressing retrieved documents into highly dense textual summaries that integrate into in-context learning.
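A rough sketch of query-aware compression feeding in-context learning, assuming a seq2seq compressor with a T5-style interface; the `t5-base` checkpoint, the prompt format, and the variable names are placeholders rather than BRIEF's released artifacts.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-base")                # placeholder compressor
compressor = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def compress(query: str, docs: list[str], max_new_tokens: int = 128) -> str:
    """Condense retrieved documents into a short, query-aware summary."""
    inputs = tok(f"question: {query} context: {' '.join(docs)}",
                 return_tensors="pt", truncation=True)
    out = compressor.generate(**inputs, max_new_tokens=max_new_tokens)
    return tok.decode(out[0], skip_special_tokens=True)

# The dense summary then stands in for the raw documents in the reader LLM's prompt.
retrieved_docs = ["doc 1 text ...", "doc 2 text ..."]  # placeholder retrieval results
query = "Who advised the founder of the lab?"
summary = compress(query, retrieved_docs)
reader_prompt = f"Context: {summary}\nQuestion: {query}\nAnswer:"
```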
EchoPrime is on Hugging Face daily papers 🚀 huggingface.co/papers/2410.09…
🚀Introducing MRAG-Bench: How do Large Vision-Language Models utilize vision-centric multimodal knowledge? 🤔Previous multimodal knowledge QA benchmarks can mainly be solved by retrieving text knowledge.💥We focus on scenarios where retrieving knowledge from an image corpus is more…
1. Healthcare systems need enterprise-grade AI solutions they can trust. Read our latest whitepaper to learn more about what enterprise-grade AI for healthcare looks like: abridge.com/ai/science-ai-…
🎉Happy to present 3 papers in #DMLR Workshop @ICML2024 (remotely)!! Say hi to my collaborators😃 ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models 📜 arxiv.org/abs/2308.158122 w/ @rohanwadhawan7 @kaiwei_chang @VioletNPeng (1/3)
I'm attending ICML, and will be presenting 3 papers on behalf of my group #PlusLab (yes, students' visa issues)... Here's some information about the papers we'll be presenting from #PlusLab. Come talk to me and my students! 🧵(1/5)
🚀 Exciting News! 🚀 Join @baharanm and me for a 2-hour tutorial on Data-Efficient Learning! Learn the principles behind data curation: the secret sauce powering today’s AI revolution! ⚡️ See you at 1pm on Monday CEST in Hall A8! 🙌 🔗 More details: sjoshi804.github.io/data-efficient…
I'll be giving a 2-hour tutorial on data-efficient learning with my PhD student @sjoshi804 on Monday July 22 at #ICML2024. Join us to learn more about this cool topic! ➡️ We can learn better from better data! ⬅️🙌🌱
Check out the performance of the newly released Gemini-1.5 models on ConTextual!
🚨 BREAKING: @GoogleDeepMind's Gemini-1.5 and @OpenAI's GPT-4o improve over their older versions on ConTextual, but still lag behind humans! It tests joint reasoning over text and visual content, which makes it harder than TextVQA-style datasets✍️ More: con-textual.github.io