Shaun K.E. Ee
@shaunkeee
AI policy researcher at @iapsAI. 🇸🇬 born. Former AC @CyberStatecraft and Yenching Scholar @PKUYCA. Tech policy, natsec, East Asia. Views my own.
A really obvious way AI will displace humans is that AI will be nice, pleasant, kind and thoughtful, but humans will be stupid and annoying. So the future is a bunch of nice, pleasant, kind, thoughtful AIs, and humans that are stupid and annoying.
For context, our estimate is fairly minor compared to everyday uses of electricity. Even heavy ChatGPT usage would not dramatically increase your energy footprint — a typical chat query uses less energy than a lightbulb or a laptop does in a few minutes.
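To make the comparison concrete, here is a back-of-the-envelope sketch. The per-query figure and device wattages below are illustrative assumptions, not measurements from the post:

```python
# Hypothetical figures for illustration only (assumptions, not measured values):
QUERY_WH = 0.3      # assumed energy per chat query, in watt-hours
LED_BULB_W = 10     # typical LED bulb power draw, in watts
LAPTOP_W = 50       # typical laptop power draw, in watts

def minutes_equivalent(device_watts: float, query_wh: float = QUERY_WH) -> float:
    """How many minutes the device runs on one query's worth of energy."""
    return query_wh / device_watts * 60

bulb_minutes = minutes_equivalent(LED_BULB_W)    # 0.3 Wh / 10 W * 60 = 1.8 min
laptop_minutes = minutes_equivalent(LAPTOP_W)    # 0.3 Wh / 50 W * 60 = 0.36 min
```

Under these assumed numbers, one query buys you roughly two minutes of lightbulb, consistent with the "less energy than a lightbulb for a few minutes" framing.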
Congress could dramatically improve the H-1B program with a simple, one-sentence change that could likely pass in a reconciliation bill. Here’s the case… 1. High-skilled immigration is critical if you want America to win. 2. Our immigration system should be a meritocracy that…
Easily fixed by significantly raising the minimum salary and adding a yearly cost for maintaining the H-1B, making it materially more expensive to hire from overseas than domestically. I’ve been very clear that the program is broken and needs major reform.
We have updated our post on #J28243 to include local pressure (1025 hPa) altitude corrections for the ADS-B data. ADS-B data is only reported in Standard pressure (1013.25 hPa). flightradar24.com/blog/azerbaija…
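The correction described above can be sketched as follows. ADS-B altitudes are pressure altitudes referenced to the standard 1013.25 hPa; converting to local pressure uses the standard rule of thumb of roughly 27–28 ft per hPa near sea level (the 27.3 ft/hPa constant is that approximation, not a figure from the post):

```python
STD_PRESSURE_HPA = 1013.25  # ADS-B reference: standard pressure
FT_PER_HPA = 27.3           # approximate altimetry rule of thumb near sea level

def corrected_altitude_ft(pressure_alt_ft: float, local_qnh_hpa: float) -> float:
    """Shift an ADS-B pressure altitude (standard 1013.25 hPa reference)
    to a local-pressure (QNH) altitude."""
    return pressure_alt_ft + (local_qnh_hpa - STD_PRESSURE_HPA) * FT_PER_HPA

# With the local pressure of 1025 hPa cited in the post, reported
# altitudes shift up by about (1025 - 1013.25) * 27.3 ≈ 321 ft.
offset_ft = corrected_altitude_ft(0, 1025)
```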
1/11 I’m genuinely impressed by OpenAI’s 25.2% Pass@1 performance on FrontierMath—this marks a major leap from prior results and arrives about a year ahead of my median expectations.
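For readers unfamiliar with the metric: Pass@1 is the special case k=1 of pass@k, the probability that at least one of k sampled solutions is correct. A minimal sketch of the standard unbiased estimator (computed from n generations per problem, of which c are correct):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations, c of them
    correct, is correct. pass@1 reduces to c / n."""
    if n - c < k:
        return 1.0  # too few incorrect samples to fill k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 from 10 generations with 5 correct:
p1 = pass_at_k(10, 5, 1)  # = 0.5, i.e. the raw success fraction
```

A 25.2% Pass@1 therefore means the model's first sampled answer was correct on about a quarter of the problems.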
This is really great to see! Specification-based evaluation is such a critical aspect of pretty much all engineering fields -- it makes a ton of sense to connect that approach to how we evaluate LLMs.
Today's AI landscape is reminiscent of the early automotive and aviation industries. Although we have seen remarkable demonstrations and early successes, the full transformative impact and proliferation of LLM systems are bottlenecked by robustness and reliability challenges.…
Inside the brain of a protein language model 🔍 A thread on reverse engineering neural networks: Some methods, challenges, and rabbit holes on the 20 amino acids—the building blocks of life.
1. There have been warning signs for years that many blue state policies aren't working. Especially because states like California make it really difficult to build anything. Here's a thread with some data... 🧵
This is the best paper written so far about the impact of AI on scientific discovery
Remember Golden Gate Claude? @etowah0 and I have been working on applying the same mechanistic interpretability techniques to protein language models. We found lots of features and they’re... pretty weird? 🧵
Which case studies can inform the regulation of advanced AI? New paper from me, @oscar__delaney, Ashwin Acharya and @zoehtwilliams undertakes a first-of-its-kind systematic search for relevant regulatory precedents. iaps.ai/research/ai-re… Summary in the thread👇[1/4]
📢 We're thrilled to share that Asher Brass, a researcher at IAPS, is one of the co-authors of a new paper titled "Responsible Reporting for Frontier AI Development", led by Noam Kolt of the University of Toronto.
Mitigating the risks from AI systems requires up-to-date and reliable information about them. By reporting safety-critical information to government, industry, and civil society, developers can improve visibility. Our new paper: Responsible Reporting for Frontier AI Development
What lessons does core US biosecurity policy hold for AI regulation? Among other things, Ashwin Acharya and I find that the Federal Select Agent Program (FSAP) offers a precedent for an R&D-phase AI licensing regime. iaps.ai/research/feder… Summary in thread. [1/5]