Chris Schnabl
@inxoy_
@Cambridge_Uni, @UCBerkeley. Prev. Quant. Now: AI. Too contrarian for real life, too conformist for X. Acta non verba.
CS 2881 by @boazbaraktcs is the university course I'm most excited about in a while. Even better, it features @EdTurner42 and @NeelNanda5's paper on Emergent Misalignment. Anyone interested in AI Safety should follow along. windowsontheory.org/2025/07/20/ai-…
In Palo Alto for a week from tomorrow. Hmu if you want to catch up.
Am I stupid, or why can't I use iPhone Mirroring in mainland Europoor?
Want to shape Google DeepMind's work in AI for Science? I'm hiring a Lead (Technical) Program Manager in London to lead our program management team for Science & Strategic Initiatives. Job: job-boards.greenhouse.io/deepmind/jobs/… Team: deepmind.google/science/
So uncanny. x.com/inxoy_/status/…
How is MUC airport just dead at 10pm. Ngmi.
Imagine you are a random tourist and these three guys ask you to take a picture. What would you do? x.com/lexfridman/sta…
Life is full of interesting surprises. I stopped by Paris and ran into these two (@durov and @jack) separately, and we had an amazing conversation about life and freedom. The experience definitely felt like part of a simulation. I'm pretty sure the pic is AI-generated. I'm…
Attestable Audits: Verifiable AI Safety Benchmarks Using Trusted Execution Environments
Christoph Schnabl (@inxoy_), Daniel Hugenroth (@lambdapioneer), Bill Marino, Alastair R. Beresford (@arberesford)