Kenneth Stanley
@kenneth0stanley
SVP of Open-Endedness @LilaSciences. Prev: Maven CEO, Lead @OpenAI, Uber AI, prof @UCF. NEAT, HyperNEAT, novelty search, POET. Book: Why Greatness Cannot Be Planned
Important announcement (with job opportunities!): I’m thrilled to share that I just joined @LilaSciences as SVP of Open-Endedness! Lila is a new name in the AI space, but one you will be hearing a lot from. Their unique mission to pursue Scientific Superintelligence could not…
Announcing a research preview and a new paper! `martian/code` uses MI research to outperform existing models on codegen: withmartian.com/code (OpenAI-API compatible). One focus: text2sql. A peek behind the research preview: TinySQL, a paper on how models generate SQL. 🧵👇
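Since the endpoint is described as OpenAI-API compatible, a request to it would take the standard chat-completions shape. A minimal sketch of building such a payload for a text2sql query; the model name, system prompt, and schema below are illustrative assumptions, not values from Martian's documentation:

```python
import json

def build_chat_request(schema: str, question: str,
                       model: str = "martian/code") -> dict:
    """Build an OpenAI-style chat-completions payload for a text2sql query."""
    return {
        "model": model,  # hypothetical model identifier
        "messages": [
            {"role": "system",
             "content": f"You translate questions into SQL. Schema:\n{schema}"},
            {"role": "user", "content": question},
        ],
        "temperature": 0.0,  # deterministic decoding suits SQL generation
    }

payload = build_chat_request(
    schema="CREATE TABLE users (id INT, name TEXT, signup_date DATE);",
    question="How many users signed up in 2024?",
)
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client should accept this payload once pointed at the preview's base URL.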
"Why Greatness Cannot Be Planned" Japanese edition! @kenneth0stanley @joelbot3000
The Japanese edition of the acclaimed book "Why Greatness Cannot Be Planned" by Kenneth Stanley & Joel Lehman has been published by BNN! 『目標という幻想』 ("The Illusion of Objectives: An Open-Ended Approach That Yields Unknown Results"). Supervising editor: Mizuki Oka; translation: Haruki Makio; commentary: Mizuki Oka & Ken Suzuki. The book spans science, technology, art, business, and more…
Amazing to see a Japanese edition of the book “Why Greatness Cannot Be Planned” by @kenneth0stanley and @joelbot3000 now available in Japan! 🇯🇵⛩️ Thanks @miz_oka @kensuzuki and others for working on this project.
A great honor to have our book Why Greatness Cannot Be Planned debut as a brand new Japanese edition! Thank you @SakanaAILabs for calling it out! @joelbot3000 and I are thrilled to share our ideas with Japan, and special thanks to the tireless efforts of @miz_oka and @kensuzuki…
We’re training AI on everything that we know, but what about things that we don’t know? At #ICML2025, the EXAIT Workshop sparked a crucial conversation: as AI systems grow more powerful, they're relying less on genuine exploration and more on curated human data. This shortcut…
Sounds like a potential side effect of fractured entangled representation (FER) from SGD under conventional conditions. See arxiv.org/abs/2505.11581
New paper & surprising result. LLMs transmit traits to other models via hidden signals in data. Datasets consisting only of 3-digit numbers can transmit a love for owls, or evil tendencies. 🧵
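The setup described above can be caricatured as a data pipeline: a teacher model emits sequences containing nothing but 3-digit numbers, the sequences are filtered for format, and the student is fine-tuned on them. A minimal sketch with a stand-in teacher; the filter and data format are illustrative assumptions, not the paper's actual code:

```python
import re
import random

def is_clean_number_sequence(text: str) -> bool:
    """Accept only comma-separated 3-digit numbers, e.g. '142, 857, 390'."""
    return bool(re.fullmatch(r"\d{3}(,\s*\d{3})*", text.strip()))

def fake_teacher_sample(rng: random.Random, n: int = 5) -> str:
    """Stand-in for a teacher model: emit n random 3-digit numbers."""
    return ", ".join(str(rng.randint(100, 999)) for _ in range(n))

rng = random.Random(0)
# Keep only samples that pass the format filter; in the real setup these
# strings would become the student's fine-tuning dataset.
dataset = [s for s in (fake_teacher_sample(rng) for _ in range(100))
           if is_clean_number_sequence(s)]
```

The surprising claim is that even after this aggressive filtering, data generated by a teacher with some trait can still shift a student toward that trait.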
Could not be more excited about teaming up with @_joelsimon ! Welcome, Joel, to the Lila Open-Endedness Team!
Excited to share that I'm joining Ken's lab at Lila to research open-endedness and explore the future of human/agent collaborative systems for science, creativity and discovery! 🧪🤖
"When does Neuro-evolution out-compete Reinforcement Learning in Transfer Learning tasks?" Find us at @GeccoConf where we are presenting this work of our Grow-AI team with @risi1979 @miltonllera @JoachimWinther @eplantec arxiv.org/abs/2505.22696 sebastianrisi.com/grow-ai/
Whatever we build is driven by some philosophy, whether examined or not. This is why I'm a fan of @cosmos_ins's call for philosopher-builders (link below). The future isn't fixed, what we choose to build -- and why -- matters. It's worth examining our OS to check for upgrades.
Generative models are great at mimicking data — but real (scientific) discovery requires going beyond it. Excited to present our paper “Provable Maximum Entropy Manifold Exploration via Diffusion Models” this Wednesday at ICML 2025! We propose a scalable, theoretically grounded…
A guy created a dataset of 50 books published in London between 1800 and 1850 for LLM training. No modern bias. It's actually super cool to see what can be trained on it!
I worry that so much discussion of AI risks and alignment overlooks the rather large elephant in the room: creativity and open-endedness. Policy makers and gatekeepers need to understand two competing forces that no one seems to talk about: (1) there is a massive economic…
I'm really excited to be presenting FMSPs at @RL_Conference later this year!
Thrilled to introduce Foundation Model Self-Play, led by @_aadharna. FMSPs combine the intelligence & code generation of foundation models with the curriculum of self-play & principles of open-endedness to explore diverse strategies in multi-agent games, like the one below 🧵👇
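Going only by the one-line description above, the FMSP idea can be sketched as a self-play loop where a foundation model proposes new policies against an archive of past champions. Everything here (the stub "model", the rock-paper-scissors game, the acceptance rule) is an illustrative assumption, not the paper's method:

```python
def beats(a: int, b: int) -> bool:
    """Rock(0)-paper(1)-scissors(2): a beats b iff a == (b + 1) % 3."""
    return a == (b + 1) % 3

def stub_foundation_model(champion_move: int):
    """Stand-in for an LLM asked to write a policy countering the champion."""
    counter = (champion_move + 1) % 3
    return lambda: counter

archive = [lambda: 0]  # seed the archive with a trivial 'always rock' policy
for _ in range(3):
    champ = archive[-1]
    challenger = stub_foundation_model(champ())
    if beats(challenger(), champ()):  # keep challengers that beat the champion
        archive.append(challenger)
```

In the real system the policies would be code generated by a foundation model, and the archive pressure would push toward diverse strategies rather than a single counter-chain.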
I've read most of the collective works of the QD trio @kenneth0stanley - @jeffclune - @joelbot3000 , and yet this @MLStreetTalk video really hits differently - incisively cutting to that nagging feeling we all have when using (incredible!) LLMs today