Shichang (Ray) Zhang
@ShichangZhang
Postdoc @Harvard | Research on XAI | Ex-@UCLA, Ex-@Stanford, Ex-@Berkeley
Why is interpretability, not winning the scaling race or banning China, the key to dominance in AI? Our answer to OSTP/NSF, w/ Goodfire's @banburismus_, Transluce's @cogconfluence, and MIT's @dhadfieldmenell: resilience.baulab.info/docs/AI_Action… Here's why: 🧵 ↘️
Super excited to share our latest preprint that unifies multiple areas within explainable AI that have been evolving somewhat independently: 1. Feature Attribution 2. Data Attribution 3. Model Component Attribution (aka Mechanistic Interpretability) arxiv.org/abs/2501.18887…
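For a concrete taste of the first family, here is a minimal, made-up gradient-x-input example of feature attribution. The unified framework in the preprint is broader than this; the tiny model and feature names below are purely illustrative.

```python
# Toy feature attribution via gradient x input on a small classifier.
# The model and features here are made up for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

x = torch.randn(1, 4, requires_grad=True)   # one input with 4 features
target_class = 1

logit = model(x)[0, target_class]
logit.backward()

# gradient x input: how much each feature pushes the target-class logit
attributions = (x.grad * x).detach().squeeze()
for i, a in enumerate(attributions.tolist()):
    print(f"feature {i}: attribution {a:+.3f}")
```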
It's happening today!!! We are also hosting a networking event from 5:00 - 5:30 PM. You don't want to miss the opportunity to network with this group and discuss the foundations of AI regulations for the coming years.
Join us at the #RegulatableML workshop at #NeurIPS2024 to learn about AI regulations and how to operationalize them in practice. 🗓️ Date: Dec 15, 2024 (East Meeting Room 13) 🕓 Time: 8:15 am - 5:30 pm 🔗 Details: regulatableml.github.io We have an exciting schedule: ⭐️ Six…
1 more day until the abstract deadline for our workshop at #NeurIPS2024 on AI, policy, and regulations: shorturl.at/79ow2. The full paper deadline is a few days after that. We look forward to receiving your submissions and seeing you at the workshop in Vancouver.
Thrilled to receive the KDD Dissertation Award Runner-Up for my PhD work on Neural-Symbolic Reasoning. Sincere thanks to my PhD advisors @YizhouSun and @kaiwei_chang and my letter writers @yisongyue and @jhamrick. Thanks to the award committee @kdd_news for such an honor.
How can we control LLM behavior with an LLM-as-a-judge? Check out our paper: "Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller" Website: llm-self-control.github.io Paper: arxiv.org/abs/2406.02721 Code: github.com/HenryCai11/LLM…
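For intuition, here is a rough, hypothetical sketch of gradient-based steering (not the paper's actual implementation): a toy target-token score stands in for the judge, and its gradient with respect to the prompt embeddings is used as a steering direction. The paper compresses such suffix gradients into a learned prefix controller; this sketch just nudges the embeddings directly.

```python
# Hypothetical sketch of gradient-based steering; model, prompt, and scale
# are arbitrary choices, and the "judge" is a stand-in target-token score.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("I am feeling", return_tensors="pt").input_ids

# Embed the prompt and make the embeddings differentiable.
embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)
out = model(inputs_embeds=embeds)

# Toy "judge": log-probability of a target token stands in for a learned score.
target_id = tok(" happy", add_special_tokens=False).input_ids[0]
score = torch.log_softmax(out.logits[0, -1], dim=-1)[target_id]
score.backward()

# The gradient w.r.t. the prompt acts as a steering direction; here we apply
# it directly to the embeddings and inspect the new next-token prediction.
steered = embeds + 5.0 * embeds.grad
out2 = model(inputs_embeds=steered)
print(tok.decode([out2.logits[0, -1].argmax().item()]))
```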
Can LLMs play the hidden-identity board game "Resistance Avalon"? Check out: arxiv.org/abs/2310.05036 Code: github.com/jonathanmli/Av… In this work, we built AvalonBench, a game engine with several fixed-rule baselines, and found that ChatGPT-3.5 still cannot beat these simple rule-based agents.
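For flavor, a hypothetical sketch of how a fixed-rule baseline and an LLM-backed agent could share one interface in such a game engine. The class names, prompts, and voting rule are illustrative, not AvalonBench's actual API.

```python
# Illustrative agent interface: both agents implement `vote` so they can be
# pitted against each other in the same game loop. Not AvalonBench's real code.
import random

class RuleAgent:
    """Naive baseline: approve a proposed team only if it contains nobody
    this agent suspects of being evil."""
    def __init__(self, suspected_evil):
        self.suspected_evil = set(suspected_evil)

    def vote(self, proposed_team, history):
        return all(p not in self.suspected_evil for p in proposed_team)

class LLMAgent:
    """Wraps a chat model; the prompt and parsing are placeholders."""
    def __init__(self, ask_llm):
        self.ask_llm = ask_llm  # callable: prompt string -> response text

    def vote(self, proposed_team, history):
        prompt = (f"You are playing Avalon. Team proposed: {proposed_team}. "
                  f"History: {history}. Answer APPROVE or REJECT.")
        return "APPROVE" in self.ask_llm(prompt).upper()

# Example round with a stubbed-out LLM call.
agents = [RuleAgent(suspected_evil={3}),
          LLMAgent(lambda p: random.choice(["APPROVE", "REJECT"]))]
team, history = [0, 2], []
print([a.vote(team, history) for a in agents])
```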
🧸We introduce SciBench, a challenging college-level scientific problem dataset designed to evaluate the reasoning abilities of current LLMs (#gpt4, #chatgpt). 🐻We find that no current prompting method or external tool improves all capabilities. Github: github.com/mandyyyyii/sci…
SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models paper page: huggingface.co/papers/2307.10… Recent advances in large language models (LLMs) have demonstrated notable progress on many mathematical benchmarks. However, most of these…
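As an illustration, here is a minimal sketch of how one might grade numeric answers on a SciBench-style benchmark using a relative tolerance. The example problems, field names, and tolerance are assumptions, not the benchmark's official grading script.

```python
# Toy evaluation loop: compare a model's numeric answer to the reference
# within a relative tolerance. All data and thresholds are illustrative.
import math

problems = [
    {"question": "What is the escape velocity from Earth in km/s?", "answer": 11.2},
    {"question": "How many moles are in 18 g of water?", "answer": 1.0},
]

def grade(model_answer: float, reference: float, rel_tol: float = 0.05) -> bool:
    return math.isclose(model_answer, reference, rel_tol=rel_tol)

def run_model(question: str) -> float:
    # Placeholder for an actual LLM call plus numeric-answer extraction.
    return 11.0 if "velocity" in question else 1.0

correct = sum(grade(run_model(p["question"]), p["answer"]) for p in problems)
print(f"accuracy: {correct / len(problems):.2%}")
```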
Our survey on #GraphNeuralNetwork acceleration is now on arXiv: arxiv.org/pdf/2306.14052…. We have consolidated #GNN acceleration algorithms, systems, and customized hardware. Any comments or questions are highly appreciated! @YizhouSun @acbuller @HZJ_jingjing @eiclab @UCLA_DM
If you are at @TheWebConf, don't forget to check out our latest work on Taxonomy Expansion by Song Jiang @SongJia23015147 et al. (Tues May 02 2:40 PM – 3:00 PM room #104) and #GNN Explanation by Shichang Zhang @ShichangZhang et al. (Wed May 03 10:20 AM – 10:30 AM room #104)
A wonderful week with @xbresson and @PetarV_93 visiting the UCLA Data Mining Lab. Inspiring talks on ViT/MLP-Mixer on Graphs and Algorithmic Reasoning, and interesting discussions on many perspectives of GNNs. Thank you for sharing your time and insights with us!
I will present our paper on #GNN explainability later this morning. Our method builds on a game-theoretic value that improves on the Shapley value. You are welcome to stop by our poster #338 in Hall J from 11 am to 1 pm CST to chat.
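For background, here is a toy Monte-Carlo estimator of the standard Shapley value over graph nodes, i.e., the baseline our value improves on. The paper's actual value and GNN are not shown; the coalition-scoring function below is a stand-in.

```python
# Toy Monte-Carlo Shapley estimation for node attribution.
# `model_score` is a stand-in for evaluating a GNN on the subgraph induced
# by a coalition of nodes; everything here is illustrative.
import random

nodes = [0, 1, 2, 3]

def model_score(coalition):
    # Pretend nodes 0 and 1 are jointly important for the prediction.
    synergy = 0.5 if {0, 1} <= set(coalition) else 0.0
    return 0.2 * len(coalition) + synergy

def shapley(node, n_samples=2000):
    others = [v for v in nodes if v != node]
    total = 0.0
    for _ in range(n_samples):
        # Random predecessor set, as in a uniformly random player ordering.
        coalition = random.sample(others, k=random.randint(0, len(others)))
        total += model_score(coalition + [node]) - model_score(coalition)
    return total / n_samples

for v in nodes:
    print(f"node {v}: estimated Shapley value {shapley(v):.3f}")
```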

UCLA Data Mining Lab (@YizhouSun) will present three papers on topics including GNN explainability, graph imputation and fairness, and multi-task and OOD generalization at #NeurIPS2022. Please stop by our posters on Thu Dec 1st and talk to @ShichangZhang @arjunsubgraph @acbuller
Excited to receive the #SoCalNLP Best Paper Award for our paper "Empowering Language Models with Knowledge Graph Reasoning for Question Answering". The paper link is: arxiv.org/abs/2211.08380 Thanks to the organizers and all the great collaborators!
Our @MegagonLabs Best Paper Award winner was "Empowering Language Models with Knowledge Graph Reasoning for Question Answering" by Ziniu Hu et al. from UCLA! Paper link: arxiv.org/abs/2211.08380 Thank you to award sponsor @MegagonLabs for supporting our event! (4/4)