Hunter
@hunterharloff
investing, research, engineering @topology_vc • prev: @citadel
We built latentzip @topology_vc, an llm-powered lossless text compressor written in pure Zig. In other words: latentzip uses llms to make text files smaller. Way smaller. In a world where intelligence is cheap, llm compression may become a standard on-device primitive.
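For anyone wondering how an llm can do *lossless* compression at all, here is a minimal sketch of the general recipe (a predictive model feeding an entropy coder), not latentzip's actual Zig implementation. The toy byte-frequency model below stands in for the LLM, and the idealized bit count stands in for a real arithmetic coder.

```python
import math
from collections import defaultdict

def ideal_compressed_bits(text: str) -> float:
    """Estimate the lossless-compression floor of `text` under a toy
    adaptive order-0 byte model (Laplace-smoothed counts).

    An entropy coder (e.g. arithmetic coding) can store each symbol in
    roughly -log2 p(symbol) bits, so the total below is the size a real
    coder driven by this model would approach.
    """
    counts = defaultdict(lambda: 1)   # Laplace smoothing: every byte value starts at count 1
    total = 256                       # matches the 256 possible byte values
    bits = 0.0
    for b in text.encode("utf-8"):
        p = counts[b] / total         # model's probability for the next byte
        bits += -math.log2(p)         # cost charged by an ideal entropy coder
        counts[b] += 1                # update the model after coding;
        total += 1                    # the decoder makes the same update, so it stays in sync
    return bits

sample = "the quick brown fox jumps over the lazy dog " * 50
raw_bits = len(sample.encode("utf-8")) * 8
est_bits = ideal_compressed_bits(sample)
print(f"raw: {raw_bits} bits, modeled: {est_bits:.0f} bits "
      f"({est_bits / raw_bits:.1%} of original)")
```

Swap the toy counter for an LLM's next-token distribution and the probabilities on ordinary text get much larger, so the per-token bit cost gets much smaller; the price is running the same model deterministically on both the compress and decompress side.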

wanted to be hyped, then I read the paper. trained on the ARC eval set with puzzle-specific tokens. lol.
🚀Introducing Hierarchical Reasoning Model🧠🤖 Inspired by the brain's hierarchical processing, HRM delivers unprecedented reasoning power on complex tasks like ARC-AGI and expert-level Sudoku using just 1k examples, no pretraining or CoT! Unlock the next AI breakthrough with…
Every week Topology runs a research meeting. Sometimes we like to invite friends (dm!). This week we discussed: 1/ AI is accelerating biology faster than you think (Chai-2: years → 2 weeks for antibody creation). 2/ Chain of Thought isn’t "real" thought (models hallucinate…
the more poetry i read, the more convinced i am the field is safe from language models. something deep missing here.
Hosting a fireside chat on next-gen chips: superconductors & photonics on Monday. Jeff Shainline (prev. physicist at NIST, Physics PhD @ Brown, now CEO of Great Sky) is gonna teach us why an AI-first world demands a new kind of compute. partiful.com/e/2RE8xY8Z5xnf… Come learn with us
LLM “sleeper agents” are models that have been fine-tuned to “jailbreak” when they encounter certain trigger phrases - literally a Cold War politician’s nervous nightmare. With top models now emerging from the East, this is the most important safety problem in interpretability research
Powerful philosophical interpretations here. LLMs are inherently modeling the human zeitgeist, and we can now ask objective questions about it. Evil and good exist in latent space.
Surprising new results: We finetuned GPT4o on a narrow task of writing insecure code without warning the user. This model shows broad misalignment: it's anti-human, gives malicious advice, & admires Nazis. This is *emergent misalignment* & we cannot fully explain it 🧵
question that’s been on my mind… while we all barrel down the chat reasoning rabbit hole, is there a fundamental limitation to the intelligence and intuition captured by linguistic systems? Some of the greatest human thought has come from thought experiments and shape rotators…
Fascinating divergence between o3’s creative and reasoning capabilities - genius does not go hand in hand with reasoning
The o3-mini disappoints on the Creative Short Story Writing Benchmark. It ranks 22nd, below o1-mini.
I expect leading US labs to push back hard against DeepSeek R1 with new innovation. The US still has far more resources to play with. The critical question now is whether R1 is a true dark horse precursor - tbd.