justin
@curl_justin
tech + law. harvard law JD ‘26. previously: schwarzman, microsoft research, princeton CS
Should judges use LLMs like ChatGPT to determine the meaning of legal text? Whatever your answer, it’s already happening… @PeterHndrsn, @kartkand, Faiz Surani, and I explain why this is a dangerous idea in a recent article for Lawfare... 🧵 (1/10)
America’s view of AI is often abstract and hyperbolic. Rather than the Western concept of a superhuman or self-improving system, China is betting on a more everyday approach econ.st/44UqSd0
Tempted to use AI to help interpret statutes or draft opinions? 📜🤖 Pause first. As we explained in @lawfare, closed models can smuggle in the hidden value judgments of everyone who touched the creation and deployment pipeline. To see why, look at the recent modification of Grok’s…
Justin Curl, @PeterHndrsn, Kart Kandula, and Faiz Surani warn that judges’ use of AI or large language models to determine ordinary meaning transfers influence to unaccountable private interests and is structurally incompatible with the judicial role.
The mainstream view of AI for science says AI will rapidly accelerate science, and that we're on track to cure cancer, double the human lifespan, colonize space, and achieve a century of progress in the next decade. In a new AI Snake Oil essay, @random_walker and I argue that…
We ourselves are enthusiastic users of AI in our scientific workflows. On a day-to-day basis, it all feels very exciting. But the impact of AI on science as an institution, rather than individual scientists, is a different question that demands a different kind of analysis.…
Curl et al., “Judges Shouldn’t Rely on AI for the Ordinary Meaning of Text” | Lawfare lawfaremedia.org/article/judges…