Blake Mizerany
@bmizerany
Engineer @ollama. Previously Songbird, early @heroku, early @CoreOS, founder of Backplane (not Lady Gaga’s), @grax, founder @tierrun
The @ollama repo hit 50k stars this morning! Congrats to this team that has worked so tirelessly to make something so worthy of it. github.com/ollama/ollama
Just took down an Amoxicillin I did not rinse after dropping it on the floor. I figured there was no point.
In case anyone was wondering why I haven't been keeping up with my open source projects ;) winebusiness.com/news/people/ar…
ollama run qwq If you have previously downloaded the QwQ preview model, please update directly via: `ollama pull qwq`. Thank you @JustinLin610 @huybery. Let's go!
Today, we release QwQ-32B, our new reasoning model with only 32 billion parameters that rivals cutting-edge reasoning models, e.g., DeepSeek-R1. Blog: qwenlm.github.io/blog/qwq-32b HF: huggingface.co/Qwen/QwQ-32B ModelScope: modelscope.cn/models/Qwen/Qw… Demo: huggingface.co/spaces/Qwen/Qw… Qwen Chat:…
very fond memories of sinatra.rb, glad to see sinatra + htmx content!
Htmx on Sinatra forum.devtalk.com/t/176350 #rubylang #Sinatra #SinatraRB #RubyLang #devtalk
Llama 3.2 is available on Ollama! It's lightweight and multimodal! It's so fast and good! 🥕 Try it: 1B ollama run llama3.2:1b 3B ollama run llama3.2 🕶️ vision models are coming very soon! ollama.com/library/llama3…
📣 Introducing Llama 3.2: Lightweight models for edge devices, vision models and more! What’s new? • Llama 3.2 1B & 3B models deliver state-of-the-art capabilities for their class for several on-device use cases — with support for @Arm, @MediaTek & @Qualcomm on day one. •…
ollama run llama3.1:405b Tested in @TensorWaveCloud with @AMD MI300X 🤯
How to be an expert at crypto in 2024 with codestral: It gaslighted me over multiple attempts until I tried this.

Anyone saying we will fail, that CAD is already as fast as it possibly ever could be, that it will never work, that you'll need decades: you only give me one of those mushroom power-ups and make me stronger. I've spent my entire career proving people wrong. It's what makes it fun.
Pretty sure those are @TOTOUSA water cannons. x.com/MKBHD/status/1…
I recently got to visit some Apple labs where they durability test new iPhones before they come out, and learned a few things (🧵THREAD) #1: Have you actually seen how they water test phones for IP ratings? (video)
We need your help testing a new backend performance improvement! 1. Pull a model you don't have yet (or remove it first). Examples: ollama pull issue1736.ollama.dev/library/llama3… ollama pull issue1736.ollama.dev/library/gemma:… ollama pull issue1736.ollama.dev/library/mistral ollama pull issue1736.ollama.dev/library/llava-… ollama…
You can also try out phi-3 quickly using ollama, running 100% locally on your machine even if you don't have a GPU 😎. Have fun!
.@Meta Llama 3 - The most capable openly available LLM to date! ollama run llama3 ollama.com/library/llama3 If you pulled the Llama 3 model prior to this post, please update the model using `ollama pull`.
.@MistralAI's Mixtral 8x22B Instruct is now available on Ollama! ollama run mixtral:8x22b We've updated the tags to make the instruct model the default. If you have pulled the base model, please update it by running `ollama pull`.
Today we're adding native AI support in @supabase Edge Functions ◆ Embedding models ◆ Large language models (powered by @ollama) We've removed the cold-boot by placing the models inside the edge runtime and we're rolling out a GPU-powered sidecar. See it in action:
Never underestimate the open source AI community. These cracked engineers are here to break the limits of what’s possible with local LLMs. We just witnessed some nutty inventions. Here’s what we saw at the @ollama Open Source and Local AI meetup at @cerebral_valley (🧵):