Asfi
@AsfiShaheen
Writing code to (someday) perform financial analysis at the speed of thought. Currently playing with a GraphRAG app for financial reports stored as PDFs
I was one of the 16 devs in this study. I want to share my thoughts on the causes of, and mitigation strategies for, dev slowdown. As a "why listen to you?" hook: I experienced a -38% AI speedup (i.e., I was 38% slower with AI) on my assigned issues. I think transparency helps the community.
We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers. The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.
Hyperliquid's infrastructure let us ship the mobile perps experience we wanted, with the elite Phantom UX users expect. Jeff, @xulian_hl and team were locked in with us from day 1 - couldn't ask for better partners to build with. More to come with @HyperliquidX 🤝
Huge congrats to the Phantom team on their perps launch! We're honored that they chose Hyperliquid as their infrastructure, tapping into the best onchain liquidity with permissionless monetization via builder codes. By building on Hyperliquid, the Phantom team can focus on their…
I bet all DSPy fanboys make their bed every morning, put the dishes away immediately, and fold laundry the moment it’s dry. It’s just a habits game. DSPy just codifies tidy habits for programs that rely on LLMs.
JSON prompting can be a nice gateway to DSPy. Feels like a lot of people are simultaneously getting the same idea: harnessing LLM power while avoiding its madness requires defining inputs and outputs. And from that point a world of modules and optimizers opens up.
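To make "defining inputs and outputs" concrete, here's a minimal DSPy signature sketch. The model string, the ExtractInvoice task, and its field names are my own illustrative choices, not anything prescribed by the tweet:

```python
# Minimal sketch: the structure a JSON prompt gropes toward, as a typed DSPy signature.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any LM works; model string is illustrative

class ExtractInvoice(dspy.Signature):
    """Extract structured fields from a raw invoice snippet."""
    invoice_text: str = dspy.InputField()
    vendor: str = dspy.OutputField()
    total_usd: float = dspy.OutputField()

extract = dspy.Predict(ExtractInvoice)
result = extract(invoice_text="ACME Corp. Total due: $1,234.50")
print(result.vendor, result.total_usd)
```

The typed fields do the job the JSON schema was doing in the prompt, and because they're declared rather than buried in a string, modules and optimizers can operate on them.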
This guy writes simply and explains hard concepts clearly. More of this content, X algo gods
Progress on dense retrievers is saturating. The best retrievers in 2024 will apply new forms of late interaction, i.e. scalable attention-like scoring for multi-vector embeddings. A 🧵 on late interaction, how it works efficiently, and why/where it's been shown to improve quality
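For intuition, the simplest form of late interaction is ColBERT-style MaxSim: match each query token embedding against its best-scoring document token embedding and sum. A toy numpy sketch of the scoring step only (my own illustration, not any particular library's code):

```python
# ColBERT-style MaxSim: multi-vector scoring between query and document token embeddings.
import numpy as np

def maxsim_score(Q: np.ndarray, D: np.ndarray) -> float:
    """Q: [num_query_tokens, dim], D: [num_doc_tokens, dim], rows L2-normalized."""
    sims = Q @ D.T                         # all token-token similarities
    return float(sims.max(axis=1).sum())   # best doc match per query token, summed

rng = np.random.default_rng(0)
Q = rng.standard_normal((8, 128));   Q /= np.linalg.norm(Q, axis=1, keepdims=True)
D = rng.standard_normal((200, 128)); D /= np.linalg.norm(D, axis=1, keepdims=True)
print(maxsim_score(Q, D))
```

The "scalable" part comes from precomputing document token embeddings and finding candidate matches with approximate nearest-neighbor indexes, rather than running full query-document attention.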
Yes! Optimizers work on all models, and while you might see some performance differences, learning with one model and deploying on another tends to work! Check out this figure from a recent paper which does something very similar to MIPRO (with slight modifications)!
DSPy fans, do you know if optimizers like MIPRO are model-specific? Say I run MIPROv2 on GPT-4o mini to save on API costs, then swap to Gemini 2.5 Pro: will that work, or is the premise nonsensical? Also this is SUCH a good thread for anyone new to MIPRO
MIPROv2, our new state-of-the-art optimizer for LM programs, is live in DSPy @stanfordnlp! It's even faster, cheaper, and more accurate than MIPRO. MIPROv2 proposes instructions, bootstraps demonstrations, and optimizes combinations. Let’s dive into a visual 🧵 of how it works!
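For the model-swap question above, here's roughly what that workflow looks like: compile with a cheap LM, then switch LMs at inference. A hedged sketch; the model strings, toy metric/trainset, and `auto="light"` are illustrative, and compile arguments vary a bit across DSPy versions:

```python
# Hedged sketch: optimize a program with a cheap model, deploy on a stronger one.
import dspy
from dspy.teleprompt import MIPROv2

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # cheap model for the optimization runs

program = dspy.ChainOfThought("question -> answer")
trainset = [dspy.Example(question="What is 2+2?", answer="4").with_inputs("question")]
# (a real trainset needs tens to hundreds of examples; one is just to keep this short)

def my_metric(example, pred, trace=None):
    return example.answer.lower() in pred.answer.lower()

optimizer = MIPROv2(metric=my_metric, auto="light")
optimized = optimizer.compile(program, trainset=trainset)

# Later: the optimized instructions/demos ride along when you swap the LM.
dspy.configure(lm=dspy.LM("gemini/gemini-2.5-pro"))
print(optimized(question="What is 2+2?").answer)
```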
Yes, this is a description of how the dspy.SIMBA optimizer works. > a review/reflect stage along the lines of "what went well? what didn't go so well? what should I try next time?" etc. and the lessons from this stage feel explicit, like a new string to be added to the system…
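A hedged sketch of running it, assuming the `dspy.SIMBA` constructor and `compile` call as in recent DSPy releases (parameter names may differ in your version; the metric and one-example trainset are stand-ins):

```python
# Hedged sketch: SIMBA runs minibatches, reflects on what went well/poorly,
# and folds the resulting "lesson" strings and demos back into the program's prompts.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # illustrative model choice
program = dspy.ChainOfThought("question -> answer")
trainset = [dspy.Example(question="What is 2+2?", answer="4").with_inputs("question")]
# (use a real trainset; the single example above just keeps the sketch short)

def my_metric(example, pred, trace=None):
    return example.answer.lower() in pred.answer.lower()

simba = dspy.SIMBA(metric=my_metric, max_steps=8, max_demos=4)
optimized = simba.compile(program, trainset=trainset)
```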
Scaling up RL is all the rage right now, I had a chat with a friend about it yesterday. I'm fairly certain RL will continue to yield more intermediate gains, but I also don't expect it to be the full story. RL is basically "hey this happened to go well (/poorly), let me slightly…
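The "this went well, nudge probabilities" view maps directly onto a policy-gradient update. A toy two-armed-bandit REINFORCE sketch (my own illustration, not from the thread):

```python
# Toy REINFORCE: upweight whatever action preceded a reward, proportional to the reward.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)                  # policy parameters for a 2-armed bandit
true_reward = np.array([0.2, 0.8])    # arm 1 pays off more often
lr = 0.5

for _ in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax policy
    a = rng.choice(2, p=probs)                     # sample an action
    r = float(rng.random() < true_reward[a])       # stochastic 0/1 reward
    grad = -probs                                  # d log pi(a) / d logits ...
    grad[a] += 1.0                                 # ... = onehot(a) - probs
    logits += lr * r * grad                        # the "slightly increase" step

print(probs)  # should now strongly favor arm 1
```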
Phantom builder code fees starting to come in. Was $7k yesterday, so +$14k in 24 hours. I’m guessing this is going to be one of those exponential growth curves as they fully roll out perps to the masses. You can track their builder code fees here: hypurrscan.io/address/0xb841……
PUMP-USD hyperps are live for eligible users in Phantom ♾️ Powered by Hyperliquid.
New paper from Stanford University. "Expert-level validation of AI-generated medical text with scalable language models" The authors use dspy.BootstrapFinetune for offline RL to update the weights of their LLMs. They introduce MedVAL, a method to train LLMs to evaluate whether…
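For context, a hedged sketch of what a `dspy.BootstrapFinetune` run looks like: it collects successful traces of the program and fine-tunes the underlying LM's weights on them, instead of editing prompts. The experimental flag, model string, signature, and metric here are assumptions based on recent DSPy releases, not the paper's actual code:

```python
# Hedged sketch: weight optimization via bootstrapped traces (check current DSPy docs).
import dspy

dspy.settings.experimental = True  # weight-tuning sat behind this flag in some releases
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini-2024-07-18"))  # must be a finetunable model

program = dspy.ChainOfThought("draft_text -> validity_assessment")  # hypothetical task shape
trainset = [dspy.Example(draft_text="...", validity_assessment="...").with_inputs("draft_text")]
# (a real run needs a substantive trainset; this stub just shows the shape)

def my_metric(example, pred, trace=None):
    return example.validity_assessment.strip().lower() == pred.validity_assessment.strip().lower()

optimizer = dspy.BootstrapFinetune(metric=my_metric)
finetuned_program = optimizer.compile(program, trainset=trainset)
```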
Phantom integration tells me $HYPE is more derisked today than the last time it traded at $39. Valuation multiples are subjective, but events like Builder Codes integrating Phantom or CoreWriter enabling kHYPE point to two obvious catalysts: new users and leverage. Bullishliquid
Turning a long-ass parsing system prompt into DSPy primitives is the most satisfying experience. Sloppy strings have been Marie Kondo'd.
I hear, I forget I see, I remember I do, I understand I play, I internalize
Deciding to give attention early to an OSS project feels awfully similar to betting hard early on a promising crypto project
The Phantom integration was the first of many things Unit is working on to onboard Solana users to Hyperliquid.
Soon, Phantom users will be trading perps on HL without even knowing it. Instead of HL trying to onboard SOL users themselves, they opened up a win-win opportunity for an established brand to do it, one that can do a better job. This is the power of builder codes. Hyperliquid
Introducing: Phantom Perps 👻 ♾️ Go long or short in just a few taps. 100+ markets. Up to 40x leverage. All in your pocket. Powered by @HyperliquidX