ollama
@ollama
https://ollama.com
Hot tip for anyone doing AI dev: use Ollama to easily run models like DeepSeek-R1 or Gemma locally on your machine. It downloads them and spins up a server with an OpenAI-SDK-compatible API. The smaller models are fast and good enough to work on new features or debug streaming…
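A minimal sketch of what that looks like in practice: hitting Ollama's OpenAI-compatible endpoint on the default local port (11434) with nothing but the standard library. The model name `gemma3` is just an example — use whatever you've pulled with `ollama pull`.

```python
import json
import urllib.request

# Ollama exposes an OpenAI-compatible API on localhost:11434 by default.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(prompt: str, model: str = "gemma3", stream: bool = False) -> dict:
    """Build an OpenAI-style chat-completion payload for the local Ollama server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

def ask(prompt: str, model: str = "gemma3") -> str:
    """Send the prompt to the locally running Ollama server and return the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint speaks the OpenAI wire format, the official OpenAI SDKs also work as-is: point `base_url` at `http://localhost:11434/v1` and pass any placeholder API key.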
[7/4] minions daily ship 🚢 📖 story time! happy 4th 🇺🇸 — shipping a fun one today: an app where a remote lm and local @ollama lm team up to write children’s stories! image gen powered by the <12b param flux model from @bfl_ml check it out → apps/minion-story-teller
[6/29] minions daily ship 🚢 🧑🏽🦰 character chat app! saw that role-play apps are taking off on @OpenRouterAI — so we made it local-first. we 🚢’d a character chat app: ☁️ lms spin up the persona, local gemma3n (via @ollama) drives the conversation on-device give it a whirl →…
Full house tonight for @ollama's 2nd Birthday event!! Great to meet the Ollama founding team and get a sneak peek of some big things coming!!
will be showing what i’ve been working on for the last little bit around agents 👀 lots of work put into this by the team. excited to make ollama even more useful for everyone catch you all tonight @ollama :)
Team Cua is at ICML this week! We're hanging around the Ollama booth in the Exhibit Hall - come say hi and chat about Computer-Use Agents and Agentic RPA. We’re also giving out free credits to try our new cua.ai Container Platform. 1/6
Minions poster 🥹 Thursday 11am Pacific East Exhibition Hall A-B E-2907
How can we use small LLMs to shift more AI workloads onto our laptops and phones? In our paper and open-source code, we pair on-device LLMs (@ollama) with frontier LLMs in the cloud (@openai, @together), to solve token-intensive workloads on your 💻 at 17.5% of the cloud cost…
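The idea above can be sketched in a few lines: the small on-device model does the token-heavy reading of the long input in chunks, and the frontier model only ever sees the short extracts. The function below is a hypothetical illustration of that division of labor — `local_lm` and `cloud_lm` are stand-in callables for real Ollama and cloud API calls, not the paper's actual protocol.

```python
from typing import Callable, List

def minion_answer(
    document: str,
    question: str,
    local_lm: Callable[[str], str],   # stand-in for an on-device model call
    cloud_lm: Callable[[str], str],   # stand-in for a frontier-model API call
    chunk_size: int = 2000,
) -> str:
    """Pair a local and a cloud LM so most tokens are processed on-device."""
    # Split the long document into chunks the small model can handle.
    chunks: List[str] = [
        document[i : i + chunk_size] for i in range(0, len(document), chunk_size)
    ]
    # Local model reads every chunk (the expensive, token-intensive part).
    extracts = [local_lm(f"Extract anything relevant to {question!r}:\n{c}") for c in chunks]
    # Cloud model synthesizes an answer from the short extracts only.
    return cloud_lm(f"Question: {question}\nNotes:\n" + "\n".join(extracts))
```

The cloud bill then scales with the size of the extracts rather than the full document, which is where savings like the quoted 17.5% of cloud cost can come from.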
minions poster today at 11am pst east exhibition hall there will be swag. lfg
Ladies and gentlemen, a fully functional Next.js app you can test locally, powered by the Genkit framework + @ollama and local models. Works with various vision models. I tested it with LLaVA and Gemma 3 already, good results. Source code to follow.
Baking a good sample of Genkit + @ollama + local models that support vision for text extraction. Learned a lot building this with AI. Will be sharing it when it's ready.
Ollama and friends are coming to Vancouver, Canada! Join us to celebrate Ollama's 2nd birthday on the night of Thursday, July 17th! RSVP required: lu.ma/ollama-birthda…

It’s easy to fine-tune small models w/ RL to outperform foundation models on vertical tasks. We’re open sourcing Osmosis-Apply-1.7B: a small model that merges code (similar to Cursor’s instant apply) better than foundation models. Links to download and try out the model below!