Scott H. Hawley
@drscotthawley
Professor of Physics & Senior Data Fellow @BelmontUniv, teaching Audio Engineers. Head of Research @Hyperstate_AI. Mostly: ML for music producers.
🎉 Elated that this tutorial has been selected as "best blog post" for ICLR 2025! iclr-blogposts.github.io/2025/about/ See you in Singapore! Stop by the Poster session: iclr.cc/virtual/2025/p…
New tutorial! I spent 3 weeks realizing flow-matching/rectified flows can be viewed in a simple way that end-runs the usual pages of math: "Basic physics provides a 'straight, fast' way to get up to speed with flow-based generative models" Colab included! drscotthawley.github.io/blog/posts/Flo…
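The "straight, fast" intuition can be sketched in a few lines: a rectified flow interpolates linearly between a noise sample and a data sample, so the target velocity along the path is constant, and sampling is just integrating that velocity. This is a minimal illustrative sketch (not code from the tutorial); in a real model a neural net v_theta(x_t, t) would be trained to regress the velocity, while here we use the exact value.

```python
import numpy as np

# Straight-path ("rectified flow") idea: x_t = (1 - t) * x0 + t * x1,
# so the target velocity dx_t/dt = x1 - x0 is constant in t.

def interpolate(x0, x1, t):
    """Point on the straight path from noise x0 to data x1 at time t."""
    return (1.0 - t) * x0 + t * x1

def velocity(x0, x1):
    """Ground-truth velocity along the straight path."""
    return x1 - x0

def euler_sample(x0, v, n_steps=10):
    """Integrate dx/dt = v from t=0 to t=1 with Euler steps."""
    x = x0.copy()
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        x = x + dt * v
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=3)          # "noise" sample
x1 = np.array([1.0, 2.0, 3.0])   # "data" sample
x_gen = euler_sample(x0, velocity(x0, x1))
# With a constant velocity, Euler integration is exact: x_gen equals x1.
```

The straightness is the whole point: because the path is a line, a few big integration steps already land on the data, which is why rectified flows sample fast.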
Glad to see these stories being told. Thanks @CTmagazine @emlybelz h/t @DerekSchuurman christianitytoday.com/2025/07/meet-c…
"Audio Signal Processing in the Artificial Intelligence Era: Challenges and Directions" A good review paper on the status of ML for various audio tasks, and the major open problems. aes2.org/publications/e…
Can an AI model predict perfectly and still have a terrible world model? What would that even mean? Our new ICML paper formalizes these questions. One result tells the story: a transformer trained on 10M solar systems nails planetary orbits, but it botches gravitational laws 🧵

Feature Request: Phone speaker arrays that beam the sound directly toward the user instead of all over the cafe/airport/train/etc. Alternatively, mandatory headphone laws.
AI-generated music has improved a lot over the years. instagram.com/reel/DK-5N21M4…
Londoners, go check out this real-life exhibition! I learned of @annibale_sic's work in 2022, via prompts I saw others using with Stable Diffusion, and became a fan of the original human artist's visions. ;-)
Curates In Focus: Annibale Siconolfi 🏙️ Where architecture meets imagination. @annibale_sic crafts intricate digital cityscapes that fuse futuristic structures with echoes of ancient worlds. With a background in architecture and a passion for sci-fi, his work transforms urban…
Diffusion models have analytical solutions, but they involve sums over the entire training set, and they don't generalise at all. They are mainly useful to help us understand how practical diffusion models generalise. Nice blog + code by Raymond Fan: rfangit.github.io/blog/2025/opti…
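Concretely, the analytical solution referred to here is the closed-form optimal denoiser for a finite training set: the posterior mean E[x0 | x_t] is a softmax-weighted average of the training points, with weights proportional to exp(-||x_t - x_i||^2 / (2σ²)). A minimal sketch (my illustration, not Fan's code) shows why it can't generalise, because it only ever outputs combinations of training points:

```python
import numpy as np

# Closed-form optimal denoiser for Gaussian noising of a finite
# training set: E[x0 | x_t] is a softmax-weighted mean of the
# training points. At small sigma it snaps to the nearest training
# point -- pure memorization, no new samples.

def optimal_denoiser(x_t, train_data, sigma):
    """Posterior mean of x0 given noisy x_t = x0 + sigma * noise."""
    d2 = np.sum((train_data - x_t) ** 2, axis=1)   # squared distances
    logits = -d2 / (2.0 * sigma ** 2)
    w = np.exp(logits - logits.max())               # stable softmax
    w /= w.sum()
    return w @ train_data                           # weighted mean

train = np.array([[0.0, 0.0], [10.0, 10.0]])
x_hat = optimal_denoiser(np.array([0.3, -0.2]), train, sigma=0.1)
# x_hat is (essentially) the nearest training point, [0, 0].
```

Note the sum runs over the entire training set at every denoising step, which is exactly why this "perfect" solution is impractical at scale and interesting mainly as a baseline for understanding how trained models differ from it.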
The general chair is in the house tonight!! #IJCNN2025 @drscotthawley @dhan90001 @riccardofosco @elelopess
Who should be credited for this audio generation? In our recent work, we explore the application of unlearning methods to establish training data attribution in realistic music generative models.
"Large-Scale Training Data Attribution for Music Generative Models via Unlearning" We explore how machine unlearning can be used for Training Data Attribution (TDA) in large-scale text-to-music diffusion models. 📜 Paper: arxiv.org/abs/2506.18312 @SonyAI_global
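The core attribution idea can be sketched on a toy model: credit a generation to the training item whose removal changes that output the most. This is a loose illustration only; the paper applies unlearning to large text-to-music diffusion models, whereas here "unlearning" is approximated by exact leave-one-out retraining of a linear least-squares model, and all names and data are made up.

```python
import numpy as np

# Toy sketch of unlearning-based training data attribution (TDA):
# score each training item by how much the query prediction moves
# when that item is removed and the model is refit.

def fit(X, y):
    """Least-squares weights w minimizing ||X w - y||^2."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def attribution_scores(X, y, x_query):
    """|prediction change| at x_query when each item is 'unlearned'."""
    base = x_query @ fit(X, y)
    scores = []
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        scores.append(abs(base - x_query @ fit(X[keep], y[keep])))
    return np.array(scores)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
# Plant one high-leverage outlier that strongly shapes predictions
# near itself; it should receive the largest attribution score.
X[0] = np.array([5.0, 5.0, 5.0])
y[0] = 30.0
scores = attribution_scores(X, y, x_query=np.array([5.0, 5.0, 5.0]))
```

For diffusion models exact retraining is hopeless, which is why the paper turns to unlearning methods as a tractable stand-in for "what would the model do without this training example?"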
Position Paper: Let people present from their own laptops.