Michele Ginolfi
@micginolfi
Asst. prof @UNI_FIRENZE - Astrophysics & AI. I think too much about time, entropy, and human/machine learning.
A very preliminary version of sonicWeb, a Herakoi spin-off. It works on smartphones too, using touch. sites.google.com/view/michelegi…
New episode of #umAnI with Enkk, on how LLMs work and how they are changing the world. 🤖 youtu.be/4P9LoT4Pbac?si…
The Urgency of Interpretability: Why it's crucial that we understand how AI models work darioamodei.com/post/the-urgen…
We built an AI model to simulate how a fruit fly walks, flies and behaves – in partnership with @HHMIJanelia. 🪰 Our computerized insect replicates realistic motion, and can even use its eyes to control its actions. Here’s how we developed it – and what it means for science. 🧵
youtu.be/tqgCYExGxN0?si… Episode 11 of #umAnI 🤖🚀 We talk with Marc Mezard about the deep link between physics and AI and how each improves the other. How does a neural network like a large language model "think"? Can AI accelerate scientific discoveries? This and much more :)
Episode #10 of the #umani podcast. We talk with Vera Gheno about:
- sociolinguistics,
- imagining futures through words,
- AI, language models and their impact on theories of language,
- many other interesting things :)
Enjoy! youtu.be/vrxqnkWx3_8?si…
The last paper of my PhD is finally out! Introducing "Intuitive physics understanding emerges from self-supervised pretraining on natural videos". We show that, without any prior, V-JEPA (a self-supervised video model) develops an understanding of intuitive physics!
If you're interested in AI4Science, a research grant is available to work with me and @matteobriganti2 on a cool interdisciplinary project: "AI for the Design of Open-Shell Nanographenes for Quantum Computing Applications". Reach out if you're interested! unifi.it/it/intelligenz…
Episode #9 of the #umani podcast. We talk with Giovanni Covone about:
- extrasolar planets,
- astrobiology,
- the search for life in space,
- the stories behind fundamental discoveries.
Enjoy! youtu.be/EPpTbkX_gj0?si…
"Move 37" is the word-of-day - it's when an AI, trained via the trial-and-error process of reinforcement learning, discovers actions that are new, surprising, and secretly brilliant even to expert humans. It is a magical, just slightly unnerving, emergent phenomenon only…
Episode #8 of the #umani podcast. We talk with Edwige Pezzulli about:
- the relationship between science and society,
- gender issues,
- the vision of a "plural" science,
- the importance of science communication understood as a redistribution of the scientific common good.
youtu.be/SoSzjzfGesg?si…
It’s been an amazing last couple of weeks, hope you enjoyed our end of year extravaganza as much as we did! Just some of the things we shipped: state-of-the-art image, video, and interactive world models (Imagen 3, Veo 2 & Genie 2); Gemini 2.0 Flash (a highly performant and…
If you're a CS student worried about recent AI developments, read this.
For those who didn't get it -- AlphaGo was an MCTS search process that made thousands of calls to two separate convnets in order to compute a single game move. Something like o1 pro is also, best we can tell, a search process making thousands of calls to multiple LLMs to output a…
Calling something like o1 "an LLM" is about as accurate as calling AlphaGo "a convnet"
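A minimal sketch of the "search process making thousands of model calls" idea from the two posts above, written as a beam search rather than MCTS; propose_moves and score_state are hypothetical stand-ins for policy/value model calls, not the actual AlphaGo or o1 machinery.

import heapq
import random

random.seed(0)

def propose_moves(state, k):
    # Stand-in for a policy-model call: return k candidate continuations.
    return [state + random.choice("abcde") for _ in range(k)]

def score_state(state):
    # Stand-in for a value-model call: toy score favouring 'a' characters.
    return state.count("a")

def search(root, width=8, depth=4):
    # Beam search: many model evaluations feed a single final output,
    # much as AlphaGo's MCTS called its policy/value convnets per move.
    frontier = [(0.0, root)]
    for _ in range(depth):
        candidates = [(s + score_state(nxt), nxt)
                      for s, state in frontier
                      for nxt in propose_moves(state, width)]
        frontier = heapq.nlargest(width, candidates)
    return max(frontier)[1]

print(search(""))

The point of the sketch: the quality of the final answer comes from the outer search loop as much as from any single model call, which is why calling the whole system "an LLM" (or AlphaGo "a convnet") undersells it.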
Totally agree! Human-written text naturally limits us to a condensed, human-filtered representation of the real world. Pre-training on videos is the way to go: 'next-frame prediction' :)
Brilliant talk by @ilyasut, but he's wrong on one point. We are NOT running out of data. We are running out of human-written text. We have more videos than we know what to do with. We just haven't solved pre-training in vision. Just go out and sense the world. Data is easy.
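For concreteness, a minimal next-frame-prediction objective of the kind the reply above alludes to; the tiny convolutional predictor, the random "video" tensor, and the MSE loss are toy assumptions, not V-JEPA or any specific published recipe.

import torch
import torch.nn as nn

# Toy "video": a batch of 4 clips, each 8 frames of 1x32x32.
clips = torch.randn(4, 8, 1, 32, 32)

# Tiny predictor: maps frame t to a guess of frame t+1.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    frames = clips[:, :-1].reshape(-1, 1, 32, 32)   # inputs: frames 0..6
    targets = clips[:, 1:].reshape(-1, 1, 32, 32)   # targets: frames 1..7
    loss = nn.functional.mse_loss(model(frames), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()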
I’m pleased to announce our work which studies complexity phase transitions in neural networks! We track the Kolmogorov complexity of networks as they “grok”, and find a characteristic rise and fall of complexity, corresponding to memorization followed by generalization. 🧵
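Kolmogorov complexity itself is uncomputable, so in practice one tracks a proxy; a common choice (an assumption here, not necessarily the paper's exact estimator) is the compressed size of the quantized weights, logged over training.

import zlib
import numpy as np

def compressed_size(weights, n_bits=8):
    # Crude Kolmogorov-complexity proxy: quantize all weights to n_bits,
    # then measure how well the resulting byte stream compresses.
    flat = np.concatenate([np.asarray(w).ravel() for w in weights])
    scale = np.abs(flat).max() + 1e-12
    q = np.round(flat / scale * (2 ** (n_bits - 1) - 1)).astype(np.int8)
    return len(zlib.compress(q.tobytes(), 9))

# Example on random "weights"; in a real run, log this every epoch.
rng = np.random.default_rng(0)
print(compressed_size([rng.normal(size=(64, 64)), rng.normal(size=64)]))

Logged along a grokking run, this is the kind of curve where one would look for the rise (memorization) followed by the fall (generalization) the thread describes.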
India is a nation with an unlimited pool of human talent, combined with the freedom to explore and develop it. The future is bright not only in chess. The summit has been reached and now the goal must be to raise it even higher for the next ascent. Congratulations again. Upward!
Introducing Willow, our new state-of-the-art quantum computing chip with a breakthrough that can reduce errors exponentially as we scale up using more qubits, cracking a 30-year challenge in the field. In benchmark tests, Willow solved a standard computation in <5 mins that would…
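For context on the scaling claim (my gloss, not the announcement's wording): in surface-code error correction the logical error rate typically falls as ε_d ∝ Λ^(-(d+1)/2), where d is the code distance (which grows as more physical qubits are used) and Λ > 1 is the suppression factor, so each step up in distance divides the error by a constant factor, i.e. exponential suppression.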
"Modelling Brain Function" by Prof Amit (from 1989 🤯) is such a great visionary book, filled with pioneering ideas drawn from physics that became the backbone of modern neural nets theory. If he were alive today, he’d probably have shared this year’s Nobel with Hinton & Hopfield
