Ramin Hasani
@ramin_m_h
@LiquidAI_
with LEAP you can build your own reliable, local AI app, from a financial advisor to a private version of the companion @elonmusk & xAI cooked up! do it today! leap.liquid.ai
Today, we release LEAP, our new developer platform for building with on-device AI — and Apollo, a lightweight iOS application for vibe checking small language models directly on your phone. With LEAP and Apollo, AI isn’t tied to the cloud anymore. Run it locally when you want,…
alright, we will split it, join our discord (liquid-ai) for details
we will be hosting 3 grand hackathons soon at @LiquidAI_ offices in Boston, SF, and Tokyo, called “Bohemian Rhapsody, the Liquid edition”. you will build a wild ai agent with a swarm of tiny models to match/surpass frontier models in a real-world app of your choice. join…
hyped to partner with Ramin and his team at @LiquidAI_ to bring powerful local AI access to even more people
watch Apollo’s evolution as we continue adding nontrivial features, and download the app today! Proud to have the legendary @localghost leading the project!
I’ve been working with the @apolloaiapp team over the past few months to improve the app and launch a new local models library powered by LEAP from @LiquidAI_. Check out the app and try the new LFM2 model — one of the best on-device models out there.
LFM2 models are now available on Apollo. Thanks to the @LiquidAI_ team for driving months of app and user-experience improvements; we are now part of the Liquid AI ecosystem. With that comes stronger features, faster models, and a robust roadmap. More to come soon.
Today, we release the 2nd generation of our Liquid foundation models, LFM2. LFM2 sets the bar for quality, speed, and memory efficiency in on-device AI. Built for edge devices like phones, laptops, AI PCs, cars, wearables, satellites, and robots, LFM2 delivers the fastest…
Finally, a dev kit for designing on-device, mobile AI apps is here: Liquid AI's LEAP venturebeat.com/business/final…
Excited to share that Apollo has been acquired by Liquid AI. Expect more models, improvements, and powerful features you've never seen before with on-device models.
Here’s a prototype I built in a few hours using the Leap SDK. It reads information from a website and uses a local LLM to generate a summary. You can find an example of the app here: github.com/Liquid4All/Lea… leap.liquid.ai
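The same fetch-then-summarize idea can be sketched in plain Python. This is not the Leap SDK API (that SDK targets mobile apps); it assumes a generic local OpenAI-compatible chat endpoint, and the endpoint URL and model id below are placeholders.

```python
import json
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text from HTML, skipping script/style content."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

def summarize(url: str,
              endpoint: str = "http://localhost:8080/v1/chat/completions") -> str:
    # Fetch the page and strip it down to plain text.
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    text = extract_text(html)[:4000]  # keep the prompt small for an on-device model
    # Ask the local model for a summary (OpenAI-style request body).
    body = json.dumps({
        "model": "lfm2-1.2b",  # placeholder model id
        "messages": [{"role": "user", "content": f"Summarize:\n\n{text}"}],
    }).encode()
    req = urllib.request.Request(endpoint, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

With any local OpenAI-compatible server running, `print(summarize("https://example.com"))` would print the model's summary of that page.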
Give LEAP a try! I built the iOS SDK, would love to get feedback from the community.
Anybody can run cloud LLMs — that's the past. Now with LEAP 🐸, you don’t need the cloud — just tap. No lag, no limits, no looking back.
Our goal: Make running a local LLM as easy as calling a cloud API, at zero cost per token
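The "zero cost per token" point is easy to make concrete with back-of-the-envelope arithmetic. The per-million-token rates below are illustrative placeholders, not any specific provider's pricing.

```python
def cloud_cost_usd(prompt_tokens: int, output_tokens: int,
                   in_price_per_m: float, out_price_per_m: float) -> float:
    """Cloud API bill for one request, given $/million-token rates."""
    return (prompt_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# Illustrative rates only: $0.50/M input, $1.50/M output,
# for 100k requests of 1,000 prompt + 500 output tokens each.
requests = 100_000
per_request = cloud_cost_usd(1_000, 500, 0.50, 1.50)
print(f"cloud: ${per_request * requests:,.2f}")  # marginal cost scales with usage
print("local: $0.00 per token")                  # on-device inference has no per-token bill
```

Under these assumptions the cloud bill is $125 and grows linearly with traffic, while the local marginal cost stays at zero (the costs shift to the device's compute and battery instead).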
Fine-tune LFM2 with Axolotl. LFM2 is now officially supported by the Axolotl fine-tuning framework, which lets you streamline fine-tuning with a reliable training pipeline. Thanks to @winglian and @axolotl_ai!
You can now fine-tune LFM2 with @axolotl_ai! 🥳 Thanks to @winglian for this! You can use his notebook to smoothly SFT an LFM2 model. I tried it and created my own LFM2-1.2B-Pirate based on this code.
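For orientation, an Axolotl SFT run is driven by a YAML config along these lines. This is a hedged sketch, not the notebook's actual config: the field names follow Axolotl's config format, but the model id, dataset path, and hyperparameters are placeholders — use @winglian's notebook for a known-good setup.

```yaml
# Hypothetical Axolotl SFT config sketch for an LFM2 base model.
# Model id, dataset path, and hyperparameters are placeholders.
base_model: LiquidAI/LFM2-1.2B
datasets:
  - path: ./pirate_chat.jsonl      # your own chat-format SFT data
    type: chat_template
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 2e-5
optimizer: adamw_torch
output_dir: ./lfm2-1.2b-pirate
```

A run would then be launched with Axolotl's CLI pointed at this file.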
Hahaha 238 tokens/s prefill
LFM2-350M running on a Raspberry Pi ... with 42 tokens per second 🚀 (238 tokens per second prefill)
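Those two rates translate directly into end-to-end latency: the prompt is processed at the prefill rate, then each output token is generated at the decode rate. A rough estimate (ignoring any fixed startup overhead):

```python
def response_time_s(prompt_tokens: int, output_tokens: int,
                    prefill_tps: float, decode_tps: float) -> float:
    """Rough end-to-end latency: prompt at the prefill rate,
    then output tokens one by one at the decode rate."""
    return prompt_tokens / prefill_tps + output_tokens / decode_tps

# The Raspberry Pi numbers above: 238 tok/s prefill, 42 tok/s decode.
t = response_time_s(prompt_tokens=500, output_tokens=100,
                    prefill_tps=238.0, decode_tps=42.0)
print(f"~{t:.1f} s for a 500-token prompt and a 100-token reply")
```

So a 500-token prompt with a 100-token reply would take roughly 4.5 seconds on the Pi under these assumptions.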
Trying out LFM2 350M from @LiquidAI_ and was mind-blown 🤯 The responses were very coherent. Fewer hallucinations compared to models of the same size. Very well done!! The best part: the Q4_K_M quantization is just 230 MB, wow!
Very happy to release these models to the community. Lots of care went into the architecture design. 1. It has low cache requirements (the gated convolutions only require a cache size of batch_size x 3 x d_model) 2. It has fewer FLOPs than a standard transformer, even at short…
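The cache point is worth a back-of-the-envelope comparison. Per layer, the stated gated-convolution cache is `batch x 3 x d_model` elements regardless of context length, while a standard attention layer's KV cache is `batch x seq_len x 2 x d_model` (assuming full multi-head attention with no grouped-query sharing; counts are in elements, not bytes).

```python
def conv_cache_elems(batch: int, d_model: int) -> int:
    # Gated short-conv cache: batch x 3 x d_model, independent of sequence length.
    return batch * 3 * d_model

def kv_cache_elems(batch: int, seq_len: int, d_model: int) -> int:
    # Standard attention KV cache: keys + values, each batch x seq_len x d_model
    # (assumes full multi-head attention, no grouped-query sharing).
    return batch * seq_len * 2 * d_model

b, d = 1, 2048
for s in (512, 4096):
    print(f"seq_len={s}: conv={conv_cache_elems(b, d)}, kv={kv_cache_elems(b, s, d)}")
# The conv cache stays fixed while the KV cache grows linearly with context.
```

At `d_model = 2048` the conv cache is a constant 6,144 elements per layer, versus a KV cache that is already ~16.8M elements per layer at a 4,096-token context — which is why this design suits memory-constrained edge devices.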
Really cool to see people starting to build apps with LFM2. Benchmarks are important, but real usage is what matters in the end. We were shocked by the popularity of old LFM models on OpenRouter (~400 tks/day). I hope to see many LFM2-powered apps in the next few months!