Jacob
@jepeake_
computing enjoyer
i still can't believe this is your virtual girlfriend
If you ever wonder why Chinese companies like DeepSeek, Qwen, and Kimi can train strong LLMs with far fewer (and nerfed) Nvidia GPUs, remember: In 1969, NASA’s Apollo mission landed people on the moon with a computer that had just 4KB of RAM. Creativity loves constraints.
actor so good he convinced an entire generation that engineering is cool as fuck.
too many people following hype. pursue unique ideas you deeply care about
Patrick Collison (co-founder of Stripe) on thinking for yourself:
what if you could take a deep learning program, search the space of possible equivalent programs, expose the hardware architecture to your search, and find the verifiably optimal program to run on that hardware?
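this is roughly superoptimization / equality-saturation territory. a minimal sketch of the search under two illustrative assumptions (neither is from the tweet): the space of equivalent programs is just the parenthesizations of a matrix chain, and the "hardware model" is a raw scalar-multiply count.

```python
# minimal sketch: every parenthesization of a matmul chain computes the
# same result, so the parenthesizations form a toy space of equivalent
# programs. the cost model (scalar multiply count) is a hypothetical
# stand-in for a real hardware model exposed to the search; exhaustive
# enumeration then yields the verifiably optimal program under that model.

def plans(dims: list[int], i: int, j: int):
    """Yield (expr, cost, shape) for every parenthesization of matrices
    i..j-1, where matrix k has shape dims[k] x dims[k+1]."""
    if j - i == 1:
        yield f"M{i}", 0, (dims[i], dims[i + 1])
        return
    for k in range(i + 1, j):  # split point: (Mi..Mk-1) @ (Mk..Mj-1)
        for lexpr, lcost, (m, p) in plans(dims, i, k):
            for rexpr, rcost, (_, n) in plans(dims, k, j):
                # cost model: m*p*n scalar multiplies for (m,p) @ (p,n)
                yield (f"({lexpr} @ {rexpr})",
                       lcost + rcost + m * p * n,
                       (m, n))

dims = [30, 35, 15, 5, 10]  # four matrices: 30x35, 35x15, 15x5, 5x10
space = list(plans(dims, 0, len(dims) - 1))
best_expr, best_cost, _ = min(space, key=lambda t: t[1])
print(f"searched {len(space)} equivalent programs")
print(f"optimum under this cost model: {best_expr}, cost {best_cost}")
```

exhaustive enumeration only works for toy spaces like this one; real systems in this vein (equality saturation à la egg, TVM-style autotuners) prune, memoize, or learn their way through the space instead.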
Go for it. Don't be afraid. Nobody cares. And if they do, people are nothing.
If every post-2020 Apple device lit up its Neural Engine at once, humanity would have ~20 zetta integer ops per second of on-device AI oomph, about five times the cumulative floating-point tensor capacity of all Nvidia GPUs sold in the same period. In practice, Nvidia’s…
Yes. Writing is not a second thing that happens after thinking. The act of writing is an act of thinking. Writing *is* thinking. Students, academics, and anyone else who outsources their writing to LLMs will find their screens full of words and their minds emptied of thought.
if you try something & it doesn’t work, this is good: you’ve ruled out one more possibility. minimise entropy.
understanding is the transformation of complexity into subjective simplicity