John
@johnrachwan
Co-Founder & CTO @PrunaAI
Check this great thread by @nifleisch describing the best ways to use wan-image 🤩
Wan 2.1 might be the best open-source text-to-image model, and everyone is sleeping on it. The one drawback is Wan's slow inference speed, so we applied a series of optimizations to bring it down to just 3s for 2 MP images. You can try it on @replicate: replicate.com/prunaai/wan-im…
New and fast 2MP model incoming replicate.com/prunaai/wan-im…
📷 Introducing Wan Image – the fastest endpoint for generating beautiful 2K images! From Wan Video, we built Wan Image which generates stunning 2K images in just 3.4 seconds on a single H100 📷 Try it on @replicate: replicate.com/prunaai/wan-im… Read our blog for details, examples,…
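For anyone who would rather script this than use the web UI, here is a minimal sketch using the official Replicate Python client. The model slug "prunaai/wan-image" and the "prompt" input name are assumptions (the links above are truncated), so check the model page on Replicate for the exact slug and input schema.

```python
# Minimal sketch: calling the Wan Image endpoint with the official
# Replicate Python client (pip install replicate).
# Assumes REPLICATE_API_TOKEN is set in the environment.
# NOTE: the slug "prunaai/wan-image" and the "prompt" field are assumptions
# based on the truncated links above -- verify on the model page.
import replicate

output = replicate.run(
    "prunaai/wan-image",
    input={"prompt": "a sunlit alpine lake at golden hour, ultra detailed"},
)

# The return type depends on the model's output schema; many image models
# return an image URL or a file-like object.
print(output)
```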
We're pleased to work with Pruna to bring you a new and fast image model. It can generate 2 megapixel images in 3.4 seconds on a single H100 replicate.com/prunaai/wan-im… This model is based on the original Wan 2.1 video model, which Pruna have compressed, optimised and pruned.
Our amazing team just shipped a new image model derived from Wan 2.1. It produces amazing 2K resolution images Try it directly on replicate ⬇️
We now have by far the fastest Flux Dev endpoint in the world, at sub-1s. Try it here: replicate.com/prunaai/flux.1…
🧃 Juicy updates from the Pruna team! We've just dropped some major improvements that'll make your model optimizations run smoother than ever: ⚡ 𝗚𝗣𝗨 𝗗𝗶𝘀𝘁𝗿𝗶𝗯𝘂𝘁𝗶𝗼𝗻 𝗠𝗮𝗱𝗲 𝗘𝗮𝘀𝘆: Pruna now supports accelerate for models distributed across multiple GPUs.…
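A rough sketch of what that multi-GPU flow could look like: shard a large diffusers pipeline across GPUs with accelerate's device placement, then optimize it with Pruna. The model id, the device_map value, and the specific SmashConfig option below are illustrative assumptions, not the exact Pruna API surface, so defer to the Pruna docs.

```python
# Rough sketch (assumptions flagged inline): shard a diffusers pipeline
# across GPUs with accelerate, then optimize it with Pruna.
import torch
from diffusers import DiffusionPipeline
from pruna import SmashConfig, smash  # pip install pruna

# device_map="balanced" asks accelerate to spread the pipeline's components
# over the available GPUs (requires `accelerate` to be installed).
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # example model id (assumption)
    torch_dtype=torch.bfloat16,
    device_map="balanced",
)

# Pick optimization algorithms via SmashConfig; the concrete option below is
# illustrative -- consult the Pruna docs for what applies to your model.
smash_config = SmashConfig()
smash_config["compiler"] = "torch_compile"  # assumed option name

smashed_pipe = smash(model=pipe, smash_config=smash_config)
image = smashed_pipe("a red panda reading a newspaper").images[0]
```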
🇪🇺✈️🇺🇸 In SF next week. Optimizing AI models & handing out croissants to @ycombinator startups haunted by Soham. DM before the croissants vanish 🥐 @PrunaAI
🌱 Compressing a single AI model endpoint can save 2t CO2e per year! For comparison, a single EU person emits ~10t CO2 per year. Over the last two weeks, our compressed Flux-Schnell endpoint on @replicate ran 𝟮𝗠 𝘁𝗶𝗺𝗲𝘀 𝗼𝗻 𝗛𝟭𝟬𝟬. For each run, the model…
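Back-of-envelope, assuming the ~2t CO2e/year figure applies to this endpoint: 2M runs per two weeks is roughly 52M runs per year, so the saving works out to on the order of 0.04 g CO2e per run.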
FLUX.1 Kontext [dev] dropped just hours ago and the community is already hacking 👀 Our friends @PrunaAI made it 5x faster in just a few hours. This is what open-source is all about: remix, build, share. We love to see it! Run it here: replicate.com/prunaai/flux-k……
Black Forest Labs have released their much anticipated open source version of Kontext. FLUX.1 Kontext [dev] is now available on Replicate: replicate.com/black-forest-l… We love open source, and we can't wait to see what the community does with this.
> This is why the community is amazing. Actually shocked 🫢 @PrunaAI optimised @bfl_ml's new FLUX.1 Kontext [dev], the OPEN WEIGHT version, and they've kindly pushed it up to @replicate! (it's fast af 🔥) Big W, opensauce
Open-weights @bfl_ml FLUX.1 Kontext [dev] is now open-source! It lets you perform image-to-image generation with state-of-the-art quality :) However, it takes ~14.4 seconds per generation on one H100. When we learned about this, we were in our offsite to chill together…
FLUX.1 Kontext dev weights were released only hours ago and already the community is pushing out remixes onto the platform It’s simple: run Kontext Dev or its derivatives on @replicate, and you can use everything you generate, commercially!
Our company retreat turned into a mini-hackathon when we got early access to the amazing Flux-Kontext-dev model from @bfl_ml. In a long night, we were able to make it 5× faster! Try our compressed version on Replicate: replicate.com/prunaai/flux-k…
We made the new open-weights FLUX.1 Kontext [dev] model 5x faster on an H100 out of the box with Pruna! Don't believe us? Check it out here: replicate.com/prunaai/flux-k…
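Since Kontext [dev] is an image-to-image (editing) model, here is a minimal sketch of calling the compressed endpoint from Python. The slug "prunaai/flux-kontext-dev" and the input field names are assumptions (the links above are truncated); check the model page on Replicate for the exact schema.

```python
# Minimal sketch: image-to-image editing with the compressed FLUX.1 Kontext
# [dev] endpoint via the Replicate Python client.
# Assumes REPLICATE_API_TOKEN is set. The slug and input field names are
# assumptions -- verify them on the model page.
import replicate

output = replicate.run(
    "prunaai/flux-kontext-dev",
    input={
        "prompt": "make the jacket bright red, keep everything else unchanged",
        "input_image": open("portrait.png", "rb"),  # field name is an assumption
    },
)
print(output)  # typically an edited-image URL or file object
```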