Liam
@liamhparker
PhD student in theoretical physics @UCBerkeley, supported by the NSF GRFP, and Researcher @PolymathicAI. Previously @Princeton.
🧵 Could this be the ImageNet moment for scientific AI? Today, with @PolymathicAI and others, we're releasing two massive datasets that span dozens of fields - from bacterial growth to supernovae! We want this to enable multi-disciplinary foundation model research.
Our internship program at Polymathic is open for applications from now through fall 2025! I believe our program provides an opportunity to work alongside some of the best researchers and engineers in the world — exploring the unknown of building foundation models for…
Welcome to the world, OpenAI o1 openai.com/index/learning…
You've heard all about AI accelerating simulations (maybe from me?), but did you know AI can also tell you what's in the Universe? Our new series of work from the #SimBIG team, led by @changhoon_hahn and published recently in @NatureAstronomy, did just that! Interesting things we did:…
Excited to announce that our latest #SimBIG collaboration research has just been published in @NatureAstronomy 🔭✨! #Astronomy #Cosmology #NatureAstronomy
By extracting non-Gaussian cosmological information on galaxy clustering at non-linear scales, a framework for cosmic inference (SimBIG) provides precise constraints for testing cosmological models. @ChanghoonHahn @cosmo_shirley @DavidSpergel et al.: nature.com/articles/s4155…
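For anyone curious what simulation-based inference actually looks like in code, here's a minimal toy sketch using the open-source `sbi` package. This is not SimBIG's pipeline (which forward-models full galaxy survey data at non-linear scales); the prior, simulator, and observation below are placeholders for illustration.

```python
# Toy neural posterior estimation with the `sbi` package -- a sketch of the
# idea behind SimBIG, NOT its actual pipeline. Simulator, prior, and
# observation here are made-up placeholders.
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Hypothetical 2-parameter "cosmology" (stand-ins for e.g. Omega_m, sigma_8).
prior = BoxUniform(low=torch.tensor([0.1, 0.6]), high=torch.tensor([0.5, 1.0]))

def simulator(theta):
    # Placeholder forward model mapping parameters to a mock summary statistic.
    return theta + 0.05 * torch.randn_like(theta)

theta = prior.sample((5000,))
x = simulator(theta)

# Train a neural density estimator q(theta | x) on (theta, x) pairs.
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# Condition on a (mock) observation to get posterior samples.
x_obs = torch.tensor([0.3, 0.8])
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))  # point estimate of the parameters
```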
Check out our recent work on simulation-based inference in galaxy clustering!
SOTA models often use bidirectional transformers for non-NLP tasks, but did you know causal transformers can outperform them even on tasks without a causal structure? Our recent work shows causal transformers learn circuits that bidirectional ones can't, leading to better performance!
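The claim above is about what circuits the models learn; mechanically, the only difference between the two architectures is the attention mask. A toy PyTorch illustration (model sizes and inputs are made up, not the paper's setup):

```python
# Causal vs. bidirectional transformer encoder: same weights, different mask.
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

x = torch.randn(8, 16, 64)  # (batch, sequence, features) -- dummy inputs
seq_len = x.shape[1]

# Bidirectional: every token attends to every other token (no mask).
out_bidirectional = encoder(x)

# Causal: token i may only attend to tokens <= i, even when the task
# itself has no left-to-right structure.
causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
out_causal = encoder(x, mask=causal_mask)
```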
Some exciting @PolymathicAI news... We're expanding!! New Research Software Engineer positions opening in Cambridge UK, NYC, and remote. Come build generalist foundation models for science with us! Please indicate your interest on the form here: docs.google.com/forms/d/e/1FAI…
🎉 Excited to introduce Gibbs Diffusion (GDiff), a new Bayesian blind denoising method with applications in image denoising and cosmology! #ICML2024 📜 arxiv.org/abs/2402.19455 🔗 github.com/rubenohana/gib… By @HeurtelDepeiges @charlesm993 @oharub @BrunoRegaldo
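In rough terms, GDiff alternates two conditional sampling steps in a Gibbs loop. Here's a schematic sketch of that loop only; `denoise_with_diffusion` and `sample_noise_params` are placeholders standing in for the paper's actual diffusion sampler and noise-parameter inference (see the repo for the real thing):

```python
# Skeleton of the Gibbs loop behind blind denoising a la GDiff -- a sketch
# of the idea, not the paper's sampler. Both helpers below are placeholders.
import torch

def denoise_with_diffusion(y, phi):
    # Placeholder: in GDiff, a conditional diffusion model samples the
    # clean signal x given observation y and noise parameters phi.
    return y - phi * torch.randn_like(y)

def sample_noise_params(residual):
    # Placeholder: in GDiff, noise parameters are sampled given the current
    # residual y - x; here we just fake a point estimate.
    return residual.std()

y = torch.randn(1, 3, 32, 32)          # noisy observation (dummy data)
phi = torch.tensor(1.0)                # initial guess for noise parameters

samples = []
for step in range(100):                # alternate the two conditionals
    x = denoise_with_diffusion(y, phi) # x   ~ p(x | y, phi)
    phi = sample_noise_params(y - x)   # phi ~ p(phi | y, x)
    samples.append((x, phi))           # joint posterior samples of (x, phi)
```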
We have been using neural posterior / simulation-based inference (SBI) for scientific computing. There was one hole: you run 10 networks on the same task and you get 10 different posteriors. Our paper arxiv.org/abs/2310.17009 attempts to aggregate non-mixing runs of SBI.
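As a concrete starting point, the naive way to aggregate the 10 runs is to pool equal numbers of posterior samples from each network, i.e. target their equal-weight mixture. The paper studies more principled aggregation of non-mixing runs; this sketch (with hypothetical trained `sbi` posterior objects in `posteriors`) is only that baseline:

```python
# Naive baseline for aggregating N independently trained posterior
# estimators: pool their samples into an equal-weight mixture.
import torch

def mixture_samples(posteriors, x_obs, n_total=10_000):
    # Draw an equal share of samples from each network's posterior so the
    # pooled set targets the equal-weight mixture of the N posteriors.
    n_each = n_total // len(posteriors)
    chunks = [p.sample((n_each,), x=x_obs) for p in posteriors]
    return torch.cat(chunks, dim=0)
```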