Nicholas Roberts
@nick11roberts
Ph.D. student @WisconsinCS. Working on foundation models and breaking past scaling laws. Previously at CMU @mldcmu, UCSD @ucsd_cse, FCC @fresnocity.
📉📉NEW SCALING LAW PHENOMENON 📉📉 We find that knowledge and reasoning exhibit different scaling behaviors! Super excited to finally tell you all about our paper on the compute optimal scaling of skills: arxiv.org/pdf/2503.10061 [1/n]
![Paper announcement: compute-optimal scaling of skills](https://pbs.twimg.com/media/GmhN1K0aUAA_yut.jpg)
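To make the phrase "different scaling behaviors" concrete, here is a minimal Python sketch of fitting separate power laws, score ≈ a·C^b, to knowledge-heavy and reasoning-heavy benchmark scores as a function of training compute C. All numbers, array names, and exponents below are hypothetical placeholders for illustration, not results or code from the paper.

```python
import numpy as np

def fit_power_law(compute, score):
    """Least-squares fit of score ~= a * compute**b in log-log space.
    Returns (a, b); b is the empirical scaling exponent."""
    b, log_a = np.polyfit(np.log(compute), np.log(score), deg=1)
    return np.exp(log_a), b

# Hypothetical FLOP budgets and benchmark scores, for illustration only.
compute = np.array([1e19, 1e20, 1e21, 1e22])
knowledge_acc = np.array([0.30, 0.38, 0.48, 0.60])   # placeholder knowledge-skill scores
reasoning_acc = np.array([0.25, 0.28, 0.32, 0.36])   # placeholder reasoning-skill scores

_, b_knowledge = fit_power_law(compute, knowledge_acc)
_, b_reasoning = fit_power_law(compute, reasoning_acc)
# Different fitted exponents for the two skill groups would be one signature of
# knowledge and reasoning scaling differently with compute.
print(f"knowledge exponent ~ {b_knowledge:.3f}, reasoning exponent ~ {b_reasoning:.3f}")
```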
LLM judges are powerful for automated evaluation but expensive and biased. 📣 Meet PAJAMA, a new framework that distills LLM judging logic into a compact, executable form (a new representation), cutting costs from thousands of dollars to just cents. 🚀 We'll present at ICML PRAL on Friday!
Next up this morning at #ICML2025, we will be presenting our work on pseudolabeling-based semi-supervised learning (SSL). East Exhibition Hall A&B, poster #E-1304, 11 am to 1:30 pm. Paper: openreview.net/pdf?id=w4c5bLk… Pseudolabeling-based SSL relies on the model’s confidence scores and…
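For context on the setup this tweet describes, here is a minimal sketch of generic confidence-thresholded pseudolabeling, the standard baseline that relies on the model's confidence scores, written in PyTorch. The function and variable names are hypothetical, and this is not the method proposed in the paper.

```python
import torch
import torch.nn.functional as F

def select_pseudolabels(model, x_unlabeled, threshold=0.95):
    """Generic confidence-thresholded pseudolabeling step.

    Scores a batch of unlabeled inputs, keeps only those whose maximum
    softmax probability clears `threshold`, and returns them together with
    their argmax predictions to be used as pseudo-labels.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
    keep = confidence >= threshold            # trust only high-confidence predictions
    return x_unlabeled[keep], pseudo_labels[keep]

# Hypothetical usage: mix the retained pseudo-labeled examples into the
# supervised objective alongside the labeled batch.
# x_sel, y_sel = select_pseudolabels(model, x_unlabeled)
# loss = F.cross_entropy(model(x_labeled), y_labeled) \
#      + F.cross_entropy(model(x_sel), y_sel)
```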
Join us today in the morning poster session at #ICML2025. We will talk about some neat ways to reduce uncertainty and improve LLM accuracy at test-time on multi-choice tasks (e.g., tool selection) using conformal prediction and an additional inference round. 📍 East…
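As background for the conformal-prediction idea mentioned above, here is a minimal sketch of split conformal calibration used to prune multiple-choice options before a second inference round. The scores and calibration numbers are hypothetical placeholders, and this is a textbook construction, not the paper's implementation.

```python
import numpy as np

def conformal_quantile(cal_nonconformity, alpha=0.1):
    """Split conformal calibration.

    `cal_nonconformity` holds 1 - score(true option) on a held-out calibration
    set; the returned threshold q_hat yields prediction sets that contain the
    true option with probability roughly 1 - alpha.
    """
    n = len(cal_nonconformity)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)   # finite-sample correction
    return np.quantile(cal_nonconformity, level, method="higher")

def prune_options(option_scores, q_hat):
    """Keep options whose nonconformity (1 - score) is within q_hat; the
    surviving options are re-scored in an additional inference round (not shown)."""
    return [i for i, s in enumerate(option_scores) if 1.0 - s <= q_hat]

# Hypothetical usage: scores could be per-option likelihoods from the LLM.
cal = 1.0 - np.array([0.82, 0.67, 0.91, 0.74, 0.88, 0.79, 0.93, 0.71, 0.85, 0.90])
q_hat = conformal_quantile(cal, alpha=0.1)
kept = prune_options([0.90, 0.15, 0.78, 0.05], q_hat)  # indices kept for round two
```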
Excited to be at ICML’25!!! I'll present papers on improving LLM inference and evaluation, and on pseudolabeling-based semi-supervised learning. Come and say hi during these sessions, or chat anytime during the week! [C1]. Prune 'n Predict: Optimizing LLM Decision-making with…
Heading to #ICML! I’ll be representing SprocketLab at @UWMadison and @SnorkelAI. Reach out if you want to chat about data-centric AI, data development, agents, and foundation models.