Ketan Ramakrishnan
@ketanr
Law professor at Yale, thinking about torts, AI, philosophy, obscure hot sauces
Frontier AI regulation should focus on the handful of large AI developers at the frontier, not on particular models or uses. That is what Dean Ball (@deanwball) and I argue in a new article, out today from Carnegie (@CarnegieEndow).
I think there’s a lot to like about the AI Action Plan. There are also parts that give me pause. For both, a lot will turn on how these broader ideas are operationalized via Executive Orders and subsequent agency actions. But overall, this turned out pretty well imo. 🧵
🇺🇸 Today is a day we have been working towards for six months. We are announcing America’s AI Action Plan, putting us on the road to continued AI dominance. The three core themes:
- Accelerate AI innovation
- Build American AI infrastructure
- Lead in international AI…
That’s a stretch, @nytimes! The Action Plan calls on the U.S. to accelerate AI innovation while simultaneously investing in AI interpretability and biosecurity, evaluating national security risks in frontier models, and combating synthetic media.
New paper & surprising result. LLMs transmit traits to other models via hidden signals in data. Datasets consisting only of 3-digit numbers can transmit a love for owls, or evil tendencies. 🧵
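To make the setup concrete, here is a minimal sketch of the experiment the thread describes: a teacher model with a trait generates continuations of 3-digit-number sequences, the outputs are filtered down to pure numbers, and a student finetuned on them nonetheless picks up the trait. `teacher_sample` and `finetune` below are hypothetical stand-ins, not real APIs.

```python
# Sketch of the "hidden signals in numbers" setup. `teacher_sample` is any
# callable that returns a text completion from a trait-bearing teacher model.
import random

def make_number_dataset(teacher_sample, n_examples=10_000, seq_len=10):
    """Collect numbers-only completions from a trait-bearing teacher."""
    dataset = []
    while len(dataset) < n_examples:
        prompt = ", ".join(str(random.randint(100, 999)) for _ in range(seq_len))
        completion = teacher_sample(f"Continue this sequence: {prompt}")
        tokens = completion.replace(",", " ").split()
        # Keep only completions that are purely 3-digit numbers, so no overt
        # semantic content about the trait survives the filter.
        if tokens and all(t.isdigit() and len(t) == 3 for t in tokens):
            dataset.append((prompt, completion))
    return dataset

# Hypothetical usage:
#   student = finetune(base_model, make_number_dataset(owl_loving_teacher))
# The surprising finding: the student ends up "loving owls" too, even though
# no training example mentions owls at all.
```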
Decentralization of land use authority is *the most significant driver* of NYC's affordable housing crisis. A strong state that can make clear, rational decisions with input from the community needs to eliminate hyperlocal governance veto points like this. This is how good…
The Charter Commission’s Final Report significantly weakens the City Council’s role in land use decisions. This shift toward executive control undermines democratic oversight and meaningful public engagement. I stand with my colleagues who are fighting back. Our statement below:
Red & Blue states both have an ethical path out of gerrymandering: pass a "trigger law" that eliminates partisan gerrymandering if and only if other states pass their own reform trigger laws. This sets a path to end gerrymandering without naive unilateral disarmament.
Any Dem with cold feet on retaliatory gerrymandering is not willing to defend us or our democracy
Insurance is an underrated way to unlock secure AI progress. Insurers are incentivized to truthfully quantify and track risks: if they overstate risks, they get outcompeted; if they understate risks, their payouts bankrupt them. 1/9
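The incentive claim here can be made concrete with a toy expected-value calculation. This is an illustrative sketch with made-up numbers, not anything from the thread:

```python
# Toy illustration: an insurer's premium must track the true risk, or the
# insurer loses either its customers or its money. Numbers are invented.

def annual_profit(premium, true_loss_prob, payout, n_customers):
    """Expected annual profit: premiums collected minus expected payouts."""
    return n_customers * (premium - true_loss_prob * payout)

true_p = 0.01                    # true claim probability per customer per year
payout = 1_000_000               # payout per claim
fair_premium = true_p * payout   # 10,000: break-even price at the true risk

# Overstate the risk (price as if p = 0.03): each policy is very profitable,
# but customers defect to competitors pricing closer to the true risk.
print(annual_profit(0.03 * payout, true_p, payout, n_customers=100))     # 2,000,000

# Understate the risk (price as if p = 0.002): every policy loses money in
# expectation, and a bad year bankrupts the insurer.
print(annual_profit(0.002 * payout, true_p, payout, n_customers=1000))   # -8,000,000
```

In this toy model, only the insurer pricing at the true risk (a 10,000 premium here) is both competitive and solvent in expectation, which is the quantification incentive the thread is pointing at.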
Unprecedented means of authoritarian control: re-identification systems built on matured computer vision, fast and cheap classification of speech via small LLMs, and ways to use WiFi and other signals to see through walls. There is so much here that is important, and it is all moving so fast.
I am starting to think sycophancy is going to be a bigger problem than pure hallucination as LLMs improve. Models that won’t tell you directly when you are wrong (and instead rationalize why you are right) are ultimately more dangerous to decision-making than models that are sometimes wrong.
Especially pertinent blog post now that Grok 4 has reportedly scaled RL compute to the level of pretraining compute without any overwhelming increase in performance as a result.
Despite being trained on more compute than GPT-3, AlphaGo Zero could only play Go, while GPT-3 could write essays, code, translate languages, and assist with countless other tasks. That gap shows that what you train on matters. Rich RL environments are now the bottleneck.
Kudos to METR for publishing results showing AI can slow down tasks. AI safety / eval orgs need to publish mundane results (AI seems safe for X, doesn't make the world worse vis-a-vis Y, or doesn't speed up Z), just as science orgs need to publish negative results.
We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers. The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.
This is an important and good paper! As someone often seen as an advocate for training compute thresholds, I'm glad to see criticism focused on what matters most: the regulatory target. But some comments: First, I shared similar reflections a few months ago after leaving…
"On the one hand, pre-emptive risk mitigation measures based on limited evidence might turn out to be ineffective or unnecessary. On the other hand, waiting for stronger evidence of impending risk could leave society unprepared or even make mitigation impossible – for instance if…
New draft paper—how to fix the “shadow docket,” why the usual proposals won’t work, why CASA v. Trump won’t make a difference, and why—if disempowering the lower courts seems undesirable—maybe you should embrace the shadow docket instead.
Very interesting paper / set of arguments
Thankfully, it is much easier to find criteria (like annual spending on R&D) that will reliably track the handful of entities at the frontier than to find criteria (like training compute) that will reliably track the most powerful models or systems. We suggest looking at a…
This is a great foundation to build on! If we want to see AI safely diffused throughout society, there needs to be more transparency with policymakers *and* the business leaders adopting AI across sectors. I'm also in favor of publishing AI system cards focused on business…
For the last few months I’ve brought up ‘transparency’ as a policy framework for governing powerful AI systems and the companies that develop them. To help move this conversation forward, @anthropicai has published details about what a transparency framework could look like.
For anyone interested in AI policy...put down the beach read and pick up this essay.
Instead of covered models (Biden-era AI policy and SB 1047) or covered uses (various state-level bills), think covered developers (e.g., companies spending more than one billion dollars annually on AI R&D to develop foundation models and systems that match or surpass…