Ilaria Manco
@Ilaria__Manco
Research scientist @GoogleDeepMind working on music • DJ 🎶
We’ve just released Magenta RealTime, an open-weights live music model that lets you craft sounds in real time by exploring the latent space through text and audio! 🤗 Model: huggingface.co/google/magenta… 🧑💻 Code: github.com/magenta/magent… 📝 Blog post: magenta.withgoogle.com/magenta-realti…
Excited to announce 🎵Magenta RealTime, the first open-weights music generation model capable of real-time audio generation with real-time control. 👋 **Try Magenta RT on Colab TPUs**: colab.research.google.com/github/magenta… 👀 Blog post: g.co/magenta/rt 🧵 below
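(For context, "real-time with real-time control" means the Colab drives the model chunk by chunk rather than rendering a whole track. A minimal sketch, assuming the `magenta_rt` package's `MagentaRT` system class as I read the repo README; exact names may have drifted.)

```python
# Minimal sketch of chunked real-time generation with Magenta RT.
# Assumes the magenta_rt package from github.com/magenta/magenta-realtime;
# class/method names follow my reading of its README and may differ.
from magenta_rt import system

mrt = system.MagentaRT()  # loads the open-weights checkpoints

# Embed a text prompt into the style space that conditions generation.
style = mrt.embed_style("heavy metal")

state = None  # generation state carries context so chunks join seamlessly
chunks = []
for _ in range(8):  # each chunk is a couple of seconds of audio
    state, chunk = mrt.generate_chunk(state=state, style=style)
    chunks.append(chunk)
```

Because each chunk is conditioned on the running state, you can swap `style` between iterations and the music morphs live instead of restarting.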
✨ made a realtime bhangra (punjabi folk music) × techno/breakcore fusion using @googlemagenta 's lyria vst plugin in ableton. the mix constantly shifts live between genres, visualized as a sliding gradient bar 📶
We’ve released The Infinite Crate, a plugin to play with Lyria RealTime directly in the DAW (or as a standalone app)! Super happy to see this out in the world and have real-time generation better integrated with music production tools 🎶 Download here: g.co/magenta/infini…
Jam Along 🎸🥁: Create dynamic backing tracks that are different every time, varying and evolving as you play on top.
🔥Happy to announce that the AI for Music Workshop is coming to #NeurIPS2025! We have an amazing lineup of speakers, and we're calling for papers & demos (due August 22)! See you in San Diego!🏖️ @chrisdonahuey @Ilaria__Manco @zawazaw @huangcza @McAuleyLabUCSD @zacknovack @NeurIPSConf
Audio prompting now available in Magenta RT! colab.sandbox.google.com/github/magenta…
Show don't tell... Magenta RealTime now supports audio prompting in addition to text prompting, so you can now use clips of audio as latent anchors to steer generation. youtu.be/vHIf2UKXmp4?si…
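(Conceptually, an audio prompt gets embedded into the same style space as text, so the two can be mixed as weighted anchors. A hypothetical sketch: the waveform loader and `embed_style` accepting audio are assumptions about the interface, not confirmed signatures.)

```python
# Hypothetical sketch of blending an audio anchor with a text anchor.
# Assumes magenta_rt embeds both prompt types into one joint style space;
# the loader and signatures below are illustrative, not confirmed.
from magenta_rt import audio, system

mrt = system.MagentaRT()

clip = audio.Waveform.from_file("reference_loop.wav")  # assumed loader
audio_style = mrt.embed_style(clip)        # style embedding from audio
text_style = mrt.embed_style("breakcore")  # style embedding from text

# Weighted mix: 70% "sound like this clip", 30% "push toward breakcore".
w = 0.7
style = w * audio_style + (1.0 - w) * text_style

state, chunk = mrt.generate_chunk(state=None, style=style)
```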
ppl claiming art is on the decline are usually also the ones failing to take note of new forms of art.
Thesis on the decline of art: authors, musicians and filmmakers are, like us, always on their phones, and thus no longer forced to creatively contend with the silence and boredom that used to be an inescapable reality of everyday life
🎶📢 Excited to announce the 1st Workshop on LLMs for Music & Audio (LLM4Music) at #ISMIR2025! 📍 KAIST, Daejeon, Korea 🗓️ Sept 26, 2025 🧠 Exploring LLMs for music, audio, & multimodal creativity 📝 Submit by Aug 10 🔗 Info: m-a-p.ai/LLM4Music/ #AI #MusicTech #LLM4MA
On the occasion of returning to Magenta's roots at @sonarplusd, we're dusting off the blog to share news and insights about what we're working on at @GoogleDeepMind on the Lyria Team. g.co/magenta/lyria-… Our latest post is about the Lyria RealTime API, providing access to…
Pleasantly surprised with how easy/fun it is to "vibe code" new musical experiences in AI Studio. I'm not much of a JS guy, but had Gemini help me make a "Kaoss Pad" emulator using the Lyria RealTime API to explore musical latent spaces in realtime.
So excited that Lyria RealTime is now available as an API! Try it out and fork our examples here: aistudio.google.com/app/apps/bundl…
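(If you'd rather call it from Python than fork the AI Studio apps, the Gemini API exposes Lyria RealTime as a streaming music session. A minimal sketch based on my reading of the google-genai SDK's live-music interface; model name and config fields should be checked against the current docs.)

```python
# Minimal sketch of a Lyria RealTime session via the google-genai SDK.
# Method and field names follow the Gemini API music-generation docs as I
# understand them; verify against the current documentation.
import asyncio
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY",
                      http_options={"api_version": "v1alpha"})

async def main():
    async with client.aio.live.music.connect(
        model="models/lyria-realtime-exp"
    ) as session:
        # Weighted text prompts steer the latent space; re-sending them with
        # new weights mid-session crossfades the music between styles.
        await session.set_weighted_prompts(prompts=[
            types.WeightedPrompt(text="bhangra", weight=0.5),
            types.WeightedPrompt(text="techno", weight=0.5),
        ])
        await session.set_music_generation_config(
            config=types.LiveMusicGenerationConfig(bpm=128, temperature=1.0)
        )
        await session.play()
        async for message in session.receive():
            # Each message carries a chunk of raw PCM audio to stream out.
            pcm = message.server_content.audio_chunks[0].data
            break  # a real client would keep streaming to an audio device

asyncio.run(main())
```

Updating the prompt weights live is what drives effects like the sliding genre crossfade in the bhangra × techno demo above.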
A lot of big announcements today, but one nice one is the real-time music model many Magenta folks have been working on now has a name and a landing page. Introducing Lyria RealTime, the live interactive member of the Lyria family of models: deepmind.google/technologies/l…
We are looking for audio and speech generation people in Zurich, Paris, or London to join our team at Google DeepMind. We build cutting-edge speech, music and audio (also audio-visual) generation capabilities. Reach out to Jason or me if interested. Retweets very appreciated!
Our incredible team built many models announced here, including image, voice, music and video generation! And: I'm moving to London this summer, and I'm hiring for research scientist and engineering roles! Our focus is on speech & music in Zurich, Paris & London. DM/email me.
We are hiring Applied Interpretability researchers on the GDM Mech Interp Team!🧵 If interpretability is ever going to be useful, we need it to be applied at the frontier. Come work with @NeelNanda5, the @GoogleDeepMind AGI Safety team, and me: apply by 28th February as a…
A sister team to ours at Google DeepMind is looking for student researchers this summer. Please reach out if you are a PhD student working on media generation (diffusion models), or if you are a professor with students to recommend! 😀
Deadline extended to 30 January 📣 2025.ijcnn.org/authors/call-f…
📢 #IJCNN is seeking research papers, artistic demos, and exhibitions for the Special Track on Human-AI Interaction in Creative Arts & Sciences 🎨🤖
🗓️ Submission Deadlines:
Paper Proposal: Jan 15, 2025
Demo Proposal: Mar 20, 2025
🔗 Learn more: loom.ly/YuwTgDc