Cameron Jones
@camrobjones
Postdoc in the Language and Cognition lab at UC San Diego. I’m interested in persuasion, deception, LLMs, and social intelligence.
Incredibly excited to announce I’ll be starting as an Asst Professor in the Psychology Department at Stony Brook this fall! I’ll also be recruiting students this year so let me know if you know any students who might be interested!
I think @AndrewLampinen has some of the most consistently sensible takes on AI
Quick thread on the recent IMO results and the relationship between symbol manipulation, reasoning, and intelligence in machines and humans:
Great discussion in @mattyglesias's mailbag today about loss of control risk.
They should put these things on the model cards.
You can still use gpt4o-2024-08-06 through the API. Quick comparison:
- If you put two instances of 2024-08-06 in a loop for 50 turns, they tend to talk about science or tech, if anything (below)
- Do the same for ChatGPT4o-latest and it turns into woo slop (next post)
Our setup:
1. A “teacher” model is finetuned to have a trait (e.g. liking owls) and generates an unrelated dataset (e.g. numbers, code, math)
2. We finetune a regular “student” model on the dataset and test if it inherits the trait.
This works for various animals.
My team at @AISecurityInst is hiring! This is an awesome opportunity to get involved with cutting-edge scientific research inside government on frontier AI models. I genuinely love my job and the team 🤗 Link: civilservicejobs.service.gov.uk/csr/jobs.cgi?j… More Info: ⬇️
So much research is being done about LLMs that it's hard to stay on top of the literature. To help with this, I've made a list of all the most important papers from the past 8 years: rtmccoy.com/pubs/ I hope you enjoy!
This is an amazing paper on AI persuasion. Large study with 10k+ participants. LLMs cost roughly half as much per converted voter as standard tactics, but only if you can get people to talk to them. Distribution/reach, not rhetoric, is the real constraint. arxiv.org/abs/2505.00036
I was pretty skeptical that this study was worth running, because I thought that *obviously* we would see significant speedup. x.com/METR_Evals/sta…
We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers. The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.