Manfred Diaz
@linuxpotter
Ph.D. Candidate @Mila_Quebec interested in AI/ML connections with economics, game theory, and social choice theory.
Introducing Concordia 2.0, an update to our library for building multi-actor LLM simulations!! 🚀 We view multi-actor generative AI as a game engine. The new version is built on a flexible Entity-Component architecture, inspired by modern game development.
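To make the idea concrete, here is a minimal Python sketch of the Entity-Component pattern as it might apply to multi-actor LLM simulation. This is not Concordia's actual API (see the repo for that); all class, method, and function names here are hypothetical.

```python
# Sketch of an Entity-Component pattern for multi-actor LLM simulations.
# All names are hypothetical illustrations, not Concordia's real API.
from dataclasses import dataclass, field


class Component:
    """A pluggable unit of behavior attached to an entity."""

    def pre_act(self, entity: "Entity", observation: str) -> str:
        """Contribute context before the entity acts."""
        return ""


class Memory(Component):
    """Example component: remembers recent observations."""

    def __init__(self):
        self.events = []

    def pre_act(self, entity, observation):
        self.events.append(observation)
        return "Memories: " + "; ".join(self.events[-3:])


@dataclass
class Entity:
    """A bare container; all behavior lives in its components."""
    name: str
    components: list = field(default_factory=list)

    def act(self, observation: str) -> str:
        # Gather context from every component, then hand the combined
        # prompt to an LLM to produce the entity's next action.
        context = "\n".join(
            c.pre_act(self, observation) for c in self.components
        )
        prompt = f"{self.name} observes: {observation}\n{context}\nAction:"
        return llm_complete(prompt)


def llm_complete(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"<completion for: {prompt[:40]}...>"


alice = Entity("Alice", components=[Memory()])
print(alice.act("Bob waves hello."))
```

The game-engine analogy is the point of the pattern: entities stay generic, and swapping components in and out changes what an actor remembers, perceives, and does without rewriting the simulation loop.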
How should we rank generalist agents across a wide set of benchmarks and tasks? Honored to receive the AAMAS best paper award for SCO (Soft Condorcet Optimization), a voting-theory-based scheme that minimizes the mistakes made when predicting agent comparisons from the evaluation data. arxiv.org/abs/2411.00119
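A toy illustration of the underlying idea, not the paper's actual algorithm (see the arxiv link for that): learn one scalar rating per agent by gradient descent so that the rating order disagrees with as few observed pairwise comparisons as possible, using a sigmoid as a soft, differentiable version of the discrete "wrong order" count. The comparison data below is made up.

```python
# Toy sketch: rank agents from pairwise comparison data by minimizing
# a soft count of mis-predicted comparisons. An illustration of the
# idea only, not the SCO algorithm from the paper.
import numpy as np

# Hypothetical evaluation data: (winner, loser) agent indices.
comparisons = [(0, 1), (0, 2), (1, 2), (2, 3), (1, 3), (0, 3)]
n_agents = 4

ratings = np.zeros(n_agents)
tau, lr = 1.0, 0.5

for step in range(500):
    grad = np.zeros(n_agents)
    for w, l in comparisons:
        # Soft 0/1 loss: sigmoid of the (loser - winner) rating gap,
        # near 1 when the ratings predict the comparison wrongly.
        p_wrong = 1.0 / (1.0 + np.exp((ratings[w] - ratings[l]) / tau))
        g = p_wrong * (1.0 - p_wrong) / tau  # sigmoid derivative
        grad[w] -= g
        grad[l] += g
    ratings -= lr * grad

print("ratings:", np.round(ratings, 2))
print("ranking (best first):", np.argsort(-ratings))
```

On this data the recovered ranking is 0 > 1 > 2 > 3, matching every observed comparison.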
It may be time to develop AI programming languages. Code generation must be optimized for guiding models in exploring the solution space and ensuring correctness, not for human comprehension. Code specification must be optimized for synchronization between human intention and AI.
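One hypothetical reading of that last point: specifications written as executable properties, so a human states intent once and a machine checks whatever code a model generates against it. A sketch under that assumption, not a proposal for an actual language; all names are invented.

```python
# Sketch: human intent as an executable property that any
# model-generated implementation must satisfy.
import random

def spec_sorted_permutation(inp, out):
    """Human side: the intent, stated once. out is inp, sorted."""
    return out == sorted(inp)

def candidate_sort(xs):
    # AI side: stand-in for model-generated code.
    return sorted(xs)

def check(spec, impl, trials=1000):
    """Synchronization: test the implementation against the intent."""
    for _ in range(trials):
        inp = [random.randint(-100, 100)
               for _ in range(random.randint(0, 20))]
        if not spec(inp, impl(list(inp))):
            return False
    return True

print("candidate satisfies spec:",
      check(spec_sorted_permutation, candidate_sort))
```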
Announcing our latest arxiv paper: Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt arxiv.org/abs/2505.05197 We argue for a view of AI safety centered on preventing disagreement from spiraling into conflict.
You should be so lucky as to have people throughout your research career that you can openly bounce ideas off of, especially ones whose strengths complement your areas of weakness. It is a rare and precious gift.
We should let people design minds and personalities appropriate to their needs, just like it's good to let social media users have more control over their feeds. Polycentric design/governance is more durable than rigid "How Do You Do Fellow Kids" personalities imposed by LLCs. 😏
Oh no. Please, please stop doing this.
This post is a rare articulation of an important outside perspective on AI Safety, one that I think better accounts for a future that is open-ended and massively multi-agent. It effectively questions foundational philosophical assumptions that should be reconsidered.
First LessWrong post! Inspired by Richard Rorty, we argue for a different view of AI alignment, where the goal is "more like sewing together a very large, elaborate, polychrome quilt", than it is "like getting a clearer vision of something true and deep" lesswrong.com/posts/S8KYwtg5…
🐙 Very excited about this post. We reject the Axiom of Rational Convergence and reframe alignment as the art of coexisting amid deep, enduring disagreement: a patchwork quilt stitched from pluralism and pragmatism, not a mirror of the true and the deep. lesswrong.com/posts/S8KYwtg5…
In case folks are interested, here's a video of a talk I gave at MIT a couple weeks ago: youtu.be/FmN6fRyfcsY?si…
[video] "A Theory of Appropriateness with Applications to Generative Artificial Intelligence" Joel Leibo, senior staff research scientist at Google DeepMind and professor at King's College London cbmm.mit.edu/video/theory-a…