Siddharth Ramakrishnan
@siddharthvader_
investing @scalevp | previously product & ML engineering, math & cs @columbia | also interested in crypto, cities, and bay area sports
I see why office space in SF is getting expensive

this but unironically
love to land in san francisco on a beautiful summer tuesday and turn my heater on
the social network he's talking about
An important kind of social network will be one where no bots whatsoever are allowed.
"smart people like Counting Crows"
Musical taste by SAT score. Much of this is unsurprising — smart people like Beethoven, Radiohead and Sufjan Stevens, while the less bright prefer Nickelback and Beyonce — but some of it is puzzling, like smart people liking Counting Crows. While the way most of the artists…
'water is transparent only within a very narrow band of the electromagnetic spectrum, so living organisms evolved sensitivity to that band, and that's what we now call "visible light". ' (found via HN)
after AI takes everyone's desk jobs there's only going to be 2 kinds of work left: physical labor (sports), and risk taking (sports betting)
so AI coding tools are slower and more expensive, but still get adopted because they reduce the cognitive load engineers have to put in to get a result
We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers. The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.
Why you should stop working on RL research and instead work on product // The technology that unlocked the big scaling shift in AI is the internet, not transformers. I think it's well known that data is the most important thing in AI, and also that researchers choose not to work…
"The Triumph of the Light" was a statue on the geographical center of San Francisco, Mount Olympus. Today, only ruins remain.
the way i see it, the last twelve months of AI research can be summed up in just two big breakthroughs: [i] reasoning ('test-time compute') - new ways to train models that can use more tokens to generate better answers. they mostly rely on RL with verifiable rewards [ii]…
haven't seen this before. claude recommended something and then changed its mind mid-response

oh shit claude code just opened Google chrome and searched big booty latinas on pornhub
oh shit claude code just opened google chrome and programmatically navigated around to debug a UX issue wtf
feels like the Pentagon should open a pizza place on prem so we don't leak info
HIGH activity is being reported at the closest Papa Johns to the Pentagon. Freddies Beach Bar is reporting abnormally low activity levels for a Saturday at 7:11pm ET. Classic indicator for potential overtime at the Pentagon.
Tired: trip reports about drugs that modulate activation of neurotransmitter receptors
Apparently wired: trip reports about drugs that modulate activation of hormone receptors
smoothbrains.net/posts/2025-06-…
my favorite LLM eval: I ask models to rank the top 15 NBA players of all time, then dock points for obvious misses (like Steph outside the top 10). surprisingly good predictor of reasoning quality
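A minimal sketch of how such an eval could be scored, assuming a small dict of "obvious" picks with the lowest rank each can plausibly hold. The names and thresholds are illustrative, not the author's actual rubric.

```python
# Hypothetical scoring for the NBA-ranking eval described above.
# OBVIOUS_PICKS maps a consensus player to the lowest rank they can plausibly hold (assumption).
OBVIOUS_PICKS = {
    "Michael Jordan": 3,
    "LeBron James": 3,
    "Kareem Abdul-Jabbar": 5,
    "Stephen Curry": 10,   # e.g. Steph outside the top 10 counts as a miss
}

def penalty(ranking: list[str]) -> int:
    """Count obvious misses: an obvious pick ranked too low or left out entirely."""
    misses = 0
    for player, max_rank in OBVIOUS_PICKS.items():
        if player not in ranking or ranking.index(player) + 1 > max_rank:
            misses += 1
    return misses

# usage: pass the model's ordered top-15 list (shortened here for brevity)
print(penalty(["Michael Jordan", "LeBron James", "Kareem Abdul-Jabbar",
               "Magic Johnson", "Stephen Curry"]))  # -> 0
```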
1. We often observe power laws between loss and compute: loss = a * flops ^ b + c
2. Models are rapidly becoming more efficient, i.e. use less compute to reach the same loss
But: which innovations actually change the exponent in the power law (b) vs change only the constant (a)?
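A quick numeric illustration of the distinction (mine, not from the thread, with made-up coefficients): an innovation that simply makes every FLOP k times more effective shifts only the constant a, since a*(k*flops)^b = (a*k^b)*flops^b, whereas a change in b shows up as a different slope on a log-log plot.

```python
import numpy as np

def power_law(flops, a, b, c):
    return a * flops ** b + c

a, b, c, k = 1e3, -0.05, 1.7, 2.0     # made-up coefficients; k = effective-compute multiplier
flops = np.logspace(18, 24, 7)

# an innovation that makes every FLOP k times more effective...
shifted = power_law(k * flops, a, b, c)
# ...is equivalent to keeping the exponent b and rescaling the constant a by k**b
rescaled = power_law(flops, a * k ** b, b, c)

print(np.allclose(shifted, rescaled))  # True: the constant moved, the exponent did not
```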
o3 for finding a security vulnerability in the Linux kernel: sean.heelan.io/2025/05/22/how…