Weijie Xu
@weijiexu_97
Ph.D. student in Language Science at UC Irvine | Psycholinguist 🤓
Glad to share that our paper (with @rljfutrell) is now out in JML. In this paper, we propose strategic memory allocation as a mechanism for handling less predictable but more informative linguistic units. Check out this link for more details: sciencedirect.com/science/articl…

Have reading time corpora been leaked into LM pre-training corpora? Should you be cautious about using pre-trained LM surprisal as a consequence? We identify the longest overlapping token sequences and conclude the leakage is mostly not severe. In Findings of #ACL2025 #ACL2025NLP
Using electrical recordings taken from the surface of the brain, researchers decode what words neurosurgical patients are saying and show that the brain plans words in a different order than they are ultimately spoken. @adumbmoron @adeenflinker nature.com/articles/s4427…
I’m honored to share that I’ve been awarded the Outstanding Scholarship award from the UCI School of Social Sciences. Huge thanks to my advisor Richard, my lovely friends, and the language science department for their support!
Congrats to Weijie Xu, fourth-year @UCIrvine language science grad student & recipient of the @ucisocsci Outstanding Scholarship award! The faculty-nominated award recognizes an outstanding grad student for high intellectual scholarship & achievement. socsci.uci.edu/newsevents/new…
Speech errors are monitored and corrected by both speakers and comprehenders, but to what extent are these correction processes asymmetrical or interactive? Check out this new paper with co-first @jiaxuan_l & @rljfutrell to appear at @CMCL_workshop: arxiv.org/pdf/2503.16745… [1/7]
Language Models learn a lot about language, much more than we expected, without much built-in structure. This matters for linguistics and opens up enormous opportunities. So should we just throw out linguistics? No! Quite the opposite: we need theory and structure.
Question everything | Shiva Upadhye, @UCIrvine language science graduate student, thinks outside the box in the language science department. More on her Anteater journey: socsci.uci.edu/newsevents/new…
Work with @UChicagoLangLab on processing appositive relative clauses (ARCs) vs. restrictive relative clauses (RRCs) is out: doi.org/10.1111/cogs.1… ARCs typically contain side-commentary info; does a distractor in an ARC lead to an absence of the agreement attraction effect? (1/8)
Want to use computational tools to figure out how human language works? But not ready for a PhD? UC Irvine's new post-bacc program in computational language science bridges the gap. Fall 2024 applications now open! socsci.uci.edu/newsevents/new…
What does it mean for language comprehension to be “good-enough”? With @rljfutrell, we present a computational model of shallow and deep processing using rate-distortion theory in our new #CogSci2024 paper: arxiv.org/abs/2405.08223 (1/n)
Registration for SCiL 2024 @UCIrvine is now open! To register, apply for travel awards, and sign up for mentorship opportunities, check out: sites.uci.edu/scil2024/regis… (1/2)
We'll be at #HSP2022, with three presentations from the lab! @weijiexu_97, @jiaxuan_l, & Ming Xiang's poster "Syntactic adaptation to short-term cue-based distributional regularities" is on Thursday afternoon...