Henry Bigelow
@hrbigelow
Computational Biologist, Machine Learning Engineer and Researcher @AsteraInstitute. Formerly at Amgen and Broad Institute
Another tidbit from the upcoming Variational Book. The composability of Gaussian noise is a key part of what makes diffusion models work efficiently.
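A minimal numpy check of that property (the sigma values here are illustrative, not from the book): adding two independent Gaussian noise steps with variances sigma1^2 and sigma2^2 gives the same distribution as a single step with variance sigma1^2 + sigma2^2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noise scales for two successive noising steps
sigma1, sigma2 = 0.5, 1.2
n = 1_000_000
x = np.zeros(n)  # a clean "sample" to be noised

# Two-step noising: add independent Gaussian noise twice
two_step = x + sigma1 * rng.standard_normal(n) + sigma2 * rng.standard_normal(n)

# One-step noising with the combined variance sigma1^2 + sigma2^2
sigma_combined = np.sqrt(sigma1**2 + sigma2**2)
one_step = x + sigma_combined * rng.standard_normal(n)

# Both samples are draws from N(0, sigma1^2 + sigma2^2)
print(two_step.std(), one_step.std(), sigma_combined)
```

This is what lets a diffusion model jump to any noise level in a single draw instead of simulating every intermediate step.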
Is the art of transforming text into images or videos something that sparks your curiosity? @DavidDuvenaud @DrJimFan @WenhuChen @_tim_brooks @kchonyc @sleepinyourhat The essence of diffusion model construction lies in understanding step-wise deformation.
My quick thoughts on @peterboghossian’s recent livestream with @drbriankeating on “Trust Science, Not Scientists”: I appreciate Brian’s (indirect) praise of my podcast and Peter’s genuine curiosity to understand physics. Hoping they'll take to heart some shortcomings I saw: 1/
Vector quantization is taking over! @BytedanceTalk @keyutian @pess_r @robrombach @OriolVinyalsML @koraykv The details of VQ methods are highlighted, including VAR, the @NeurIPSConf paper of the year. Check out the following PDF: drive.google.com/file/d/1XnxS0b…
Just took this very brief (one-minute) personality test. Reading the analysis was strange and spooky, like watching someone solve a Rubik's cube of my emotional make-up. Take the Eristics Test eristicstest.com
This podcast is well worth checking out, even if, *especially if* advanced mathematics terrifies you a little bit. (It does for me) Seeing expert mathematicians reason through their intuitions in real time is an encouraging experience.
Consider having your pi today at The Cartesian Cafe, my podcast that's been ranked top 5 for math in the US. Watch in-depth whiteboard sessions with mathematicians, physicists, and AI experts, from Fields Medal-winning work to quantum computing. youtube.com/playlist?list=… #math
One thing that wasn't apparent to me at first - Diffusion models are VAEs with a special structure (fixed encoder, learnable decoder) - so if you're interested in Diffusion models, this book may be of interest.
Have time for diffusion? @jaschasd @sama @gdb @omarsar0 @Thom_Wolf @tunguz @DataJunkie We give a brief rundown of what sets these methods apart.
For those interested in the mysterious and challenging topic of variational inference, central to modern machine learning: I am looking forward to reading it when it is released.
What are some of the best ways you learn? @kaifulee @AndrewYNg @drfeifei @KirkDBorne @rasbt @kdnuggets @Datasciencectrl @AssemblyAI @TeachTheMachine The Variational Book dives into the details. Let's quickly compare the latent spaces of NFs and VAEs #AI #GenerativeAI
I implemented and trained the transformer proceedings.neurips.cc/paper_files/pa… #AIAYN in Jax/Haiku on Cloud TPU. Includes beam search, incremental inference with a KV cache, and a packed sentence-pair dataset. Blog entry here; comments welcome. mlcrumbs.com/transformer-fr…


Fans of epistemology in machine learning: my recent #lesswrong post grappling with notions of truth, causality and interpretability of representations: lesswrong.com/posts/iYevftcf…
"Gas the jews" chants in Sydney, Australia. Similar demonstrations are seen all over the world including in NYC, Chicago, Atlanta, Paris, and more. Universities and institutions everywhere should speak out against such hate. This transcends politics.
Video: A crowd at the steps of the Sydney Opera House chants "gas the Jews" and "f*ck the Jews" on October 9.
This looks really nice
In one day - one day! - of going open source, the Typst typesetting system passed 5,000 stars on GitHub. If you ever needed evidence that there is a real hunger for a TeX replacement, this is it.
An enthralling short discussion of how cosmologists use multiple scale density fluctuations across the Universe to deduce the nature of its origins.
Want to win a Nobel prize? Eager to hear a scientist’s story of how that didn’t happen? Ethan Siegel (@startswithabang) and I discuss the science and drama behind the Icarian journey of Losing the Nobel Prize by @DrBrianKeating. A brief overview (1/n): youtube.com/watch?v=iJ-vra…
ML researchers: just listened to this podcast episode from start to finish, all of it engaging, presenting clear mathematical foundations of machine learning and how to analyze larger and larger models and their training trajectories. A beautiful unification of ideas at the end.
Have you been awaiting a mathematically rigorous theory of large neural networks? Then join me and @TheGregYang on his incredible Tensor Programs work, which both realizes such a theory and provides concrete experimental guidance to machine learning (1/n): youtu.be/1aXOXHA7Jcw
Python Customized Tracebacks: see tensor shapes, dtypes in a traceback without a debugger pypi.org/project/pyctb/ #TensorFlow #PyTorch
If @PyTorch printed the sizes, dtypes, and devices of all the tensors involved in a failed operation, we would get AGI ten years earlier.
Excited to see many awesome community members in person at #PyTorchConference tomorrow! Some major announcements are coming too…
Tired of overblown quantum hype? Ready to learn the truth about quantum computing? Then pull up a chair at The Cartesian Cafe to get a masterclass from Scott Aaronson and some quantum straight talk: youtu.be/qs0D9sdbKPU Trailer video (1/n):
Theories of Everything: You know of my refutation of @EricRWeinstein + @DrBrianKeating's Geometric Unity. Others know of Scott Aaronson's refutation of @stephen_wolfram. At long last, Scott and I sit down for a nice conversation: youtu.be/wd-0COLM8oc The trailer clip: (1/n)
This tutorial motivates the kernel method as the optimal model within a family of models. You can manipulate the functions and their norm in both solution space and feature space. It focuses on the essential kernel idea, using the simplest method, kernel regression, as a worked example.
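A minimal sketch of the idea in numpy, assuming an RBF kernel; the small ridge term `lam` is my addition for numerical stability, and the data and hyperparameters are illustrative, not taken from the tutorial.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, gamma=1.0, lam=1e-3):
    # Solve (K + lam*I) alpha = y; the predictor is f(x) = sum_i alpha_i k(x, x_i)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Fit a noisy sine curve
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)

alpha = fit_kernel_ridge(X, y)
X_test = np.linspace(-3, 3, 50)[:, None]
pred = predict(X, alpha, X_test)
```

The key point the tutorial emphasizes: the model is expressed entirely through kernel evaluations between data points, so the feature space never has to be constructed explicitly.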