VISxAI
@VISxAI
Workshop on Visualization for AI Explainability @ieeevis #visxai #ieeevis
#VISxAI IS BACK!! 🤖📊 Submit your interactive “explorables” and “explainables” that visualize, interpret, and explain AI. #IEEEVIS 📆 Deadline: July 30, 2025 visxai.io
The PCS is open for #VISxAI 2025 at #ieeevis! We're excited to see your submissions! ➡️ Deadline to submit is July 30th.
🤖 Announcing the VISxAI Workshop! This full-day event will feature interactive explainable submissions, exceptional keynote talks, lively breakout discussions, and novel interactive demos. Submit a paper and/or interactive explanation! Deadline: July 30 visxai.io
I just rewatched @adamrpearce's talk @VISxAI and it's a real gem of reflections on the field. Highly recommend! youtube.com/watch?v=UUkftG…
Thank you for joining us at #VISxAI2024! Explore all of the amazing explainables presented today on our website visxai.io 🎉🎉🎉
Congratulations to our best submission award winners!! 🏆 “Can Large Language Models Explain Their Internal Mechanisms?” by @nadamused_, @ghandeharioun, @RyanMullins, @emilyrreif, Jimbo Wilson, @Nithum, and @iislucas 🏆 “The Illustrated AlphaFold” @ElanaPearl and @JakeSilberg
Our final lightning talk of the day is “Inside an interpretable-by-design machine learning model: enabling RNA splicing rational design” 🧪 by Mateus Silva Aragao, Shiwen Zhu, Nhi Nguyen, Alejandro Garcia, and Susan Elizabeth Liao …lizing-interpretable-model.vercel.app
Now @narphorium will present ⌨️ “ExplainPrompt: Decoding the language of AI prompts” explainprompt.com
Our next lightning talk is 🧑‍🏫 “What Can a Node Learn from Its Neighbors in Graph Neural Networks?” by @AstropowerDev, Chongwei Chen, Matthew Xu, and @WangQianwenToo. visual-intelligence-umn.github.io/GNN-101/
Come watch “Panda or Gibbon? A Beginner's Introduction to Adversarial Attacks” by Yuzhe You and @jeffjianzhao 🐼🐵 visxai-aml.vercel.app
Next, we have “A Visual Tour to Empirical Neural Network Robustness” by @ChenChe91591871, @JinbinHuang, Ethan M Remsberg, and @zcliu. 💪 cchen-vis.github.io/Narrative-Viz-…
First up, watch @ElanaPearl and @JakeSilberg present “The Illustrated AlphaFold” 🧬 elanapearl.github.io/blog/2024/the-…
The Illustrated AlphaFold bit.ly/the-illustrate… Do you want to know how AlphaFold3 works? It has one of the most intimidating transformer-based architectures, so to make it approachable, we made a visual walkthrough inspired by @JayAlammar's Illustrated Transformer! 🧵 (1/7)
Our afternoon lightning talks are STARTING NOW! Don’t miss explainables on topics across computational biology 🧪, AI robustness 💪, graph neural networks 🤖, and LLM prompting 💻.

Time for a break. ☕️ Keep the conversation going in the Discord channel! We will see you back at 10:45.
Our final lightning talk of the session is 👀 “Explainability Perspectives on a Vision Transformer: From Global Architecture to Single Neuron” by Anne Marx, Yumi Kim, Luca Sichi, Diego Arapovic, Javier Sanguino, @RSevastjanova, and @melassady. explainability-vit.ivia.ch
Now we have @kmurphysics and @DaniSBassett presenting “Where is the information in data?” 🔍 murphyka.github.io/information_ex…
Our next lightning talk is 🗣️ “TalkToRanker: A Conversational Interface for Ranking-based Decision-Making” by Conor Fitzpatrick, Jun Yuan, and @AeDeeGee. talktoranker.njitvis.com