Ari Wagen
@AriWagen
building the arsenal of discovery for drugs & materials @RowanSci, anti-gnostic
Great study from Lavo on using pretrained NNPs for crystal structure prediction! Egret-1 isn't bad, but Lavo-NN and UMA-S are better (which isn't surprising). Even the best models have top-10 accuracy of 0.6, though: this problem remains very unsolved.
We just released a new version of our Python API, making it simpler and easier to run advanced computational chemistry workflows with a few lines of code. No more manual construction of bespoke dictionaries! With the v2 API, the Python interface mimics the options on the web.
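For a sense of what "a few lines of code" looks like, here is a minimal sketch of the keyword-arguments-instead-of-bespoke-dictionaries idea. The function and attribute names below (submit_workflow, wait_for_result, etc.) are illustrative assumptions, not the documented v2 interface.

```python
# Sketch of a v2-style call: plain keyword arguments that mirror the web UI
# options, rather than a hand-built settings dictionary. All names marked
# "hypothetical" are assumptions for illustration, not the real client API.
import rowan  # assumed import name for the Rowan Python client

workflow = rowan.submit_workflow(          # hypothetical helper
    name="aspirin pKa",
    molecule="CC(=O)OC1=CC=CC=C1C(=O)O",   # SMILES input
    workflow_type="pka",                   # same option you'd pick on the web
)
workflow.wait_for_result()                 # hypothetical blocking call
print(workflow.data)                       # hypothetical result accessor
```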
New preprint! Bond-dissociation energies tell us how strong bonds are; these values are useful but historically slow to compute. Can modern low-cost methods fix this? Jonathon Vandezande and I gathered a dataset of experimental BDEs (below) and benchmarked lots of methods.
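For readers outside the field: the quantity here is the homolytic bond-dissociation enthalpy, which is just an energy difference between the radical fragments and the parent molecule (the standard definition, nothing specific to the preprint):

$$
\mathrm{BDE}(\mathrm{A{-}B}) \;=\; H_{298}(\mathrm{A}^{\bullet}) \;+\; H_{298}(\mathrm{B}^{\bullet}) \;-\; H_{298}(\mathrm{A{-}B})
$$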
@RowanSci is making molecular modeling actually usable, so I built an MCP server that lets you access their tools through natural conversation.
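As a rough sketch of what such a server looks like: the FastMCP decorator pattern below comes from the official MCP Python SDK, but the tool name and the Rowan call inside it are illustrative assumptions, not the actual server described in the tweet.

```python
# Minimal MCP server exposing one computational-chemistry tool over stdio.
# FastMCP is the MCP Python SDK entry point; everything inside the tool body
# (the rowan call, attribute names) is a hypothetical placeholder.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("rowan-tools")

@mcp.tool()
def optimize_geometry(smiles: str) -> str:
    """Submit a geometry optimization for a molecule given as a SMILES string."""
    import rowan  # assumed client import, see caveat above
    workflow = rowan.submit_workflow(      # hypothetical helper
        name=f"optimize {smiles}",
        molecule=smiles,
        workflow_type="optimization",
    )
    return f"Submitted workflow {workflow.uuid}"  # hypothetical attribute

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an MCP client (e.g. a chat app) can call the tool
```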
New low-cost computational methods like NNPs and g-xTB run orders of magnitude faster than DFT and often give comparable accuracy. But can they accurately predict protein–ligand interaction energies? To find out, we tested 11 new methods on the PLA15 benchmark.
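For reference, the quantity being benchmarked is the supermolecular interaction energy: the energy of the complex minus the energies of its separated parts. A minimal sketch of that bookkeeping, with placeholder numbers rather than PLA15 values:

```python
# Protein–ligand interaction energy: E_int = E(complex) - E(protein) - E(ligand).
# Negative values mean attraction. The energies below are placeholders,
# not PLA15 reference data.
HARTREE_TO_KCAL = 627.5095

e_complex = -2501.8421   # Hartree (placeholder)
e_protein = -2455.9032   # Hartree (placeholder)
e_ligand = -45.9071      # Hartree (placeholder)

e_int = (e_complex - e_protein - e_ligand) * HARTREE_TO_KCAL
print(f"E_int ≈ {e_int:.1f} kcal/mol")
```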
We've just posted new videos covering a variety of interesting topics:
- Computing 2D potential-energy surfaces for Diels–Alder reactions
- Running protein–ligand co-folding models
- Modeling the effect of metabolism on blood–brain-barrier penetrance
youtube.com/@RowanSci
Update: I didn't feel like I expressed everything I wanted to in this post, so I turned it into a slightly longer blog post: corinwagen.github.io/public/blog/20…
We’re excited to introduce Chai-2, a major breakthrough in molecular design. Chai-2 enables zero-shot antibody discovery in a 24-well plate, exceeding previous SOTA by >100x. Thread👇
The past few years of "AI for life sciences" have been all about the models: AF3, NNPs, PLMs, binder generation, docking, co-folding, ADMET, &c. But Chai-2, and lots of related work, shows us that the vibes are shifting. Models themselves are becoming just a building block; the…
What is Chai-2? It is a "series of models." This includes a "multimodal generative architecture, integrating all-atom structure prediction and generative modeling" (to me this sounds like AF3 and their earlier Chai-1). The release is a bit vague; this graphic is the best info:
NEW: The OpenAI-backed biotech startup Chai Discovery announced a new suite of AI models today, called Chai-2 The main result: making de novo antibodies with a 19% hit rate on average, which @zavaindar called "fucking killer" endpoints.news/openai-backed-…