Zach Freitas-Groff 🔸
@zdgroff
Senior Programme Associate @ http://longview.org. Research Affiliate @UTAustin, PhD @Stanford. Econ; AI; animals; film. Opinions are mine only. 🏳️🌈
Potentially some of the best news in a long time. PEPFAR is the U.S. anti-AIDS program credited with saving 25M lives since its inception in 2003. Of all the cuts to USAID programs, this was by far the most devastating. The PEPFAR exemption now has to pass the House.
So what's the deal with this alleged news about the Kennedy assassination?

we should *hope* that they associate consciousness with computation, since computation can happen in carbon and silicon alike. the scarier alternative is that they associate consciousness with silicon-based brains, excluding humans and other animals by default.
People pay a lot of attention to whether the U.S. and China are racing via innovation, but it seems like less attention goes to racing via diffusion or governance. Interesting ChinaTalk post on this: chinatalk.media/p/chinas-ai-st…
Midnight July 9th Anywhere on Earth—so yes, you're right, it was closer to three days when I made this post. 😁
If we want to avoid training on the chain of thought so that it remains possible to monitor, is it a problem to say that this is our strategy? Presumably, LLMs will eventually see these discussions and could learn to hide their misbehavior.
I know, I know, I know, but it is indeed a bit odd to think there's a major AI arms race when the weapons are being given away for free

I've seen the meme in a few places that mistreating conscious AI is an existential risk to humans. Is there a reason why mistreating conscious AI would be riskier for humans than "mistreating" unconscious AI? I'd have thought consciousness is orthogonal to the risk to humans.
This is very interesting research—feels less artificial than many other findings in this general area (precursors to scheming).
LLMs Often Know When They Are Being Evaluated! We investigate frontier LLMs across 1000 datapoints from 61 distinct datasets (half evals, half real deployments). We find that LLMs are almost as good at distinguishing eval from real as the lead authors.
I don't necessarily do this successfully, but: As a grantmaker, I aspire to avoid spending lots of time on close calls, where the returns to getting the decision right are low. Instead, that time can go toward making big things happen, like finding ways to scale the best stuff.
If you're interested in digital sentience research, please apply for this fellowship! In addition to a salary or stipend and other benefits, all fellows will receive a number of networking opportunities, including an invitation to the next NYU Mind, Ethics, and Policy Summit 🦋🤖
💡Leading researchers and AI companies have raised the possibility that AI models could soon be sentient. I’m worried that too few people are thinking about this. Let’s change that. I’m excited to announce a Digital Sentience Consortium. Check out these funding opps.👇