Bohang Zhang @ICLR 2024
@bohang_zhang
PhD at @pku1898, focusing on topics in ML, GNNs, and LLMs through an expressivity perspective
Thrilled to see our work honored with an #ICLR2023 outstanding paper award! Camera-ready paper: openreview.net/forum?id=r9hNv… Code: github.com/lsj2408/Grapho… Our paper covers: GNNs, graph transformers, expressivity, graph positional encodings, subgraph GNNs, and more!
Announcing the ICLR 2023 outstanding paper awards: blog.iclr.cc/2023/03/21/ann… Congratulations to the authors!
#ICLR2024 Just arrived in Vienna! Don't miss our oral presentation tomorrow afternoon in room Halle A3, focusing on GNNs and their expressive power! Also, swing by our poster session (Poster 272, Halle B). See you there!

Great work!
#ICLR2024 Arrived in Vienna! Happy to share our recent work "Towards Efficient and Effective Geometric Deep Learning for Science"! With the incredible CTL and @ask1729! May 9, 10:45am-12:45pm (Poster 254, Halle B). Details ⬇️ (1/n)
To reduce human bias in model architecture design, we propose GiT, a simple yet effective LLM-like visual framework applicable to various vision tasks (e.g., vision-language tasks and segmentation) with only a vanilla ViT. :) Code: github.com/Haiyang-W/GiT arxiv.org/abs/2403.09394
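For intuition, here is a minimal sketch of the unified-interface idea (my illustration, not the actual GiT code; all module and variable names are hypothetical): every task is cast as next-token prediction over one plain ViT-style Transformer, so captions, boxes, and masks all come out as token sequences rather than through task-specific heads.

```python
# Minimal sketch of a GiT-style unified interface (hypothetical names, not the repo's API):
# image patches and task tokens share one plain Transformer; every task is next-token prediction.
import torch
import torch.nn as nn

class UnifiedViT(nn.Module):
    def __init__(self, vocab_size=1000, dim=256, depth=4, n_heads=8, patch=16):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # image -> patch tokens
        self.tok_embed = nn.Embedding(vocab_size, dim)                         # prompt/output tokens
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)                     # one vanilla ViT-style stack
        self.head = nn.Linear(dim, vocab_size)                                 # shared next-token head

    def forward(self, image, prompt_tokens):
        p = self.patch_embed(image).flatten(2).transpose(1, 2)   # (B, N_patches, dim)
        t = self.tok_embed(prompt_tokens)                        # (B, L, dim)
        x = self.encoder(torch.cat([p, t], dim=1))               # joint sequence (causal mask omitted for brevity)
        return self.head(x[:, p.size(1):])                       # logits at the token positions only

model = UnifiedViT()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 1000])
```

Captioning, detection, and segmentation would then differ only in how the predicted tokens are de-tokenized, e.g. boxes as quantized coordinate tokens (again an assumption here, not a claim about the paper's exact scheme).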
RNNs are popular, but they have known limitations: RNNs cannot solve some algorithmic problems that Transformers can. Our new paper "RNNs Are NOT Transformers (Yet)" explores the representation gap between constant-memory RNNs and Transformers, and potential ways to bridge it.
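To see one way such a gap can arise, consider a back-of-the-envelope counting argument (my paraphrase of the general memory-capacity reasoning, not the paper's construction). Take an "index lookup" task: read a length-n sequence over a size-v vocabulary, then a query index i, and output the i-th token. A Transformer can attend to position i directly, while an RNN must squeeze the whole prefix into its fixed-size hidden state:

```python
# Back-of-the-envelope pigeonhole bound (illustrative only, not from the paper):
# a hidden state of b bits takes at most 2**b distinct values, but there are
# vocab**n distinct length-n prefixes. Once vocab**n > 2**b, two different
# prefixes collide in the state, so some lookup query must be answered wrong.
import math

def lookup_capacity(state_bits: int, vocab: int) -> int:
    """Largest n for which a b-bit state can, in principle, answer every lookup."""
    return int(state_bits / math.log2(vocab))

# e.g. a float32 hidden state of width 256 carries at most 256 * 32 = 8192 bits:
print(lookup_capacity(state_bits=256 * 32, vocab=50_000))  # 524
```

Attention, by contrast, grows its accessible context with the sequence length, which is the intuition behind separating Transformers from constant-memory RNNs.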
Join us at our #ICLR2024 workshop: "Bridging the Gap Between Practice and Theory in Deep Learning"! Workshop website: sites.google.com/view/bgpt-iclr…
Excited for #ICLR2024? Join us at our vibrant workshop: "Bridging the Gap Between Practice and Theory in Deep Learning"! Dive into a melting pot of groundbreaking theories and empirical discoveries that illuminate the enigmatic world of deep learning. (1/3)